2025-01-16 14:26:00.193609 | Job console starting...
2025-01-16 14:26:00.217132 | Updating repositories
2025-01-16 14:26:00.323310 | Preparing job workspace
2025-01-16 14:26:02.012527 | Running Ansible setup...
2025-01-16 14:26:06.935876 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-01-16 14:26:07.662890 |
2025-01-16 14:26:07.663059 | PLAY [Base pre]
2025-01-16 14:26:07.694998 |
2025-01-16 14:26:07.695151 | TASK [Setup log path fact]
2025-01-16 14:26:07.740512 | orchestrator | ok
2025-01-16 14:26:07.764023 |
2025-01-16 14:26:07.764180 | TASK [set-zuul-log-path-fact : Set log path for a change]
2025-01-16 14:26:07.799202 | orchestrator | skipping: Conditional result was False
2025-01-16 14:26:07.810077 |
2025-01-16 14:26:07.810255 | TASK [set-zuul-log-path-fact : Set log path for a ref update]
2025-01-16 14:26:07.859435 | orchestrator | ok
2025-01-16 14:26:07.870745 |
2025-01-16 14:26:07.870870 | TASK [set-zuul-log-path-fact : Set log path for a periodic job]
2025-01-16 14:26:07.915737 | orchestrator | skipping: Conditional result was False
2025-01-16 14:26:07.927296 |
2025-01-16 14:26:07.927431 | TASK [set-zuul-log-path-fact : Set log path for a change]
2025-01-16 14:26:07.941976 | orchestrator | skipping: Conditional result was False
2025-01-16 14:26:07.950647 |
2025-01-16 14:26:07.950756 | TASK [set-zuul-log-path-fact : Set log path for a ref update]
2025-01-16 14:26:07.974695 | orchestrator | skipping: Conditional result was False
2025-01-16 14:26:07.983021 |
2025-01-16 14:26:07.983136 | TASK [set-zuul-log-path-fact : Set log path for a periodic job]
2025-01-16 14:26:08.007174 | orchestrator | skipping: Conditional result was False
2025-01-16 14:26:08.024710 |
2025-01-16 14:26:08.024835 | TASK [emit-job-header : Print job information]
2025-01-16 14:26:08.095043 | # Job Information
2025-01-16 14:26:08.095286 | Ansible Version: 2.15.3
2025-01-16 14:26:08.095339 | Job: testbed-deploy-in-a-nutshell-ubuntu-24.04
2025-01-16 14:26:08.095388 | Pipeline: post
2025-01-16 14:26:08.095422 | Executor: 7d211f194f6a
2025-01-16 14:26:08.095453 | Triggered by: https://github.com/osism/testbed/commit/c652dfdad113b7916719545e84025c99003c1c24
2025-01-16 14:26:08.095483 | Event ID: ca45ea52-d415-11ef-9c4e-616e4c1d086b
2025-01-16 14:26:08.104809 |
2025-01-16 14:26:08.104924 | LOOP [emit-job-header : Print node information]
2025-01-16 14:26:08.255416 | orchestrator | ok:
2025-01-16 14:26:08.255639 | orchestrator | # Node Information
2025-01-16 14:26:08.255680 | orchestrator | Inventory Hostname: orchestrator
2025-01-16 14:26:08.255709 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-01-16 14:26:08.255736 | orchestrator | Username: zuul-testbed03
2025-01-16 14:26:08.255761 | orchestrator | Distro: Debian 12.9
2025-01-16 14:26:08.255784 | orchestrator | Provider: static-testbed
2025-01-16 14:26:08.255807 | orchestrator | Label: testbed-orchestrator
2025-01-16 14:26:08.255829 | orchestrator | Product Name: OpenStack Nova
2025-01-16 14:26:08.255853 | orchestrator | Interface IP: 81.163.193.140
2025-01-16 14:26:08.282359 |
2025-01-16 14:26:08.282491 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-01-16 14:26:08.739050 | orchestrator -> localhost | changed
2025-01-16 14:26:08.748942 |
2025-01-16 14:26:08.749059 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-01-16 14:26:09.783966 | orchestrator -> localhost | changed
2025-01-16 14:26:09.806146 |
2025-01-16 14:26:09.806272 | TASK [add-build-sshkey : Check to see if ssh
key was already created for this build] 2025-01-16 14:26:10.098163 | orchestrator -> localhost | ok 2025-01-16 14:26:10.107053 | 2025-01-16 14:26:10.107177 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID] 2025-01-16 14:26:10.139415 | orchestrator | ok 2025-01-16 14:26:10.157003 | orchestrator | included: /var/lib/zuul/builds/a5ad1afc3d714be699c9c1da1fba5829/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml 2025-01-16 14:26:10.166136 | 2025-01-16 14:26:10.166255 | TASK [add-build-sshkey : Create Temp SSH key] 2025-01-16 14:26:11.213656 | orchestrator -> localhost | Generating public/private rsa key pair. 2025-01-16 14:26:11.214194 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/a5ad1afc3d714be699c9c1da1fba5829/work/a5ad1afc3d714be699c9c1da1fba5829_id_rsa 2025-01-16 14:26:11.214277 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/a5ad1afc3d714be699c9c1da1fba5829/work/a5ad1afc3d714be699c9c1da1fba5829_id_rsa.pub 2025-01-16 14:26:11.214329 | orchestrator -> localhost | The key fingerprint is: 2025-01-16 14:26:11.214379 | orchestrator -> localhost | SHA256:oNVksJeO5j4SQ3xMVJuzMsBRnlRt+cs3CbNS7zJU0mo zuul-build-sshkey 2025-01-16 14:26:11.214428 | orchestrator -> localhost | The key's randomart image is: 2025-01-16 14:26:11.214479 | orchestrator -> localhost | +---[RSA 3072]----+ 2025-01-16 14:26:11.214524 | orchestrator -> localhost | | .o=+=. . | 2025-01-16 14:26:11.214566 | orchestrator -> localhost | | . +.* ++ | 2025-01-16 14:26:11.214700 | orchestrator -> localhost | | .oo* B. . . | 2025-01-16 14:26:11.214744 | orchestrator -> localhost | | o+o= o * o | 2025-01-16 14:26:11.214785 | orchestrator -> localhost | | ...= S o X . | 2025-01-16 14:26:11.214826 | orchestrator -> localhost | | oo o . E = | 2025-01-16 14:26:11.214867 | orchestrator -> localhost | | o. + o . | 2025-01-16 14:26:11.214911 | orchestrator -> localhost | | ... o . | 2025-01-16 14:26:11.214953 | orchestrator -> localhost | | ... 
o | 2025-01-16 14:26:11.214994 | orchestrator -> localhost | +----[SHA256]-----+ 2025-01-16 14:26:11.215103 | orchestrator -> localhost | ok: Runtime: 0:00:00.538486 2025-01-16 14:26:11.233011 | 2025-01-16 14:26:11.233187 | TASK [add-build-sshkey : Remote setup ssh keys (linux)] 2025-01-16 14:26:11.271447 | orchestrator | ok 2025-01-16 14:26:11.287843 | orchestrator | included: /var/lib/zuul/builds/a5ad1afc3d714be699c9c1da1fba5829/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml 2025-01-16 14:26:11.298894 | 2025-01-16 14:26:11.299000 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey] 2025-01-16 14:26:11.324335 | orchestrator | skipping: Conditional result was False 2025-01-16 14:26:11.337053 | 2025-01-16 14:26:11.337167 | TASK [add-build-sshkey : Enable access via build key on all nodes] 2025-01-16 14:26:11.972443 | orchestrator | changed 2025-01-16 14:26:11.983729 | 2025-01-16 14:26:11.983853 | TASK [add-build-sshkey : Make sure user has a .ssh] 2025-01-16 14:26:12.281810 | orchestrator | ok 2025-01-16 14:26:12.330794 | 2025-01-16 14:26:12.330946 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes] 2025-01-16 14:26:12.770184 | orchestrator | ok 2025-01-16 14:26:12.780357 | 2025-01-16 14:26:12.780487 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes] 2025-01-16 14:26:13.204155 | orchestrator | ok 2025-01-16 14:26:13.213818 | 2025-01-16 14:26:13.214012 | TASK [add-build-sshkey : Remote setup ssh keys (windows)] 2025-01-16 14:26:13.250468 | orchestrator | skipping: Conditional result was False 2025-01-16 14:26:13.270561 | 2025-01-16 14:26:13.270766 | TASK [remove-zuul-sshkey : Remove master key from local agent] 2025-01-16 14:26:13.686302 | orchestrator -> localhost | changed 2025-01-16 14:26:13.703475 | 2025-01-16 14:26:13.703619 | TASK [add-build-sshkey : Add back temp key] 2025-01-16 14:26:14.025521 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/a5ad1afc3d714be699c9c1da1fba5829/work/a5ad1afc3d714be699c9c1da1fba5829_id_rsa (zuul-build-sshkey) 2025-01-16 14:26:14.026206 | orchestrator -> localhost | ok: Runtime: 0:00:00.014876 2025-01-16 14:26:14.045570 | 2025-01-16 14:26:14.045822 | TASK [add-build-sshkey : Verify we can still SSH to all nodes] 2025-01-16 14:26:14.435637 | orchestrator | ok 2025-01-16 14:26:14.445238 | 2025-01-16 14:26:14.445353 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)] 2025-01-16 14:26:14.480828 | orchestrator | skipping: Conditional result was False 2025-01-16 14:26:14.510688 | 2025-01-16 14:26:14.510819 | TASK [start-zuul-console : Start zuul_console daemon.] 
2025-01-16 14:26:14.933194 | orchestrator | ok 2025-01-16 14:26:14.949412 | 2025-01-16 14:26:14.949534 | TASK [validate-host : Define zuul_info_dir fact] 2025-01-16 14:26:14.996835 | orchestrator | ok 2025-01-16 14:26:15.006182 | 2025-01-16 14:26:15.006314 | TASK [validate-host : Ensure Zuul Ansible directory exists] 2025-01-16 14:26:15.347438 | orchestrator -> localhost | ok 2025-01-16 14:26:15.357628 | 2025-01-16 14:26:15.357748 | TASK [validate-host : Collect information about the host] 2025-01-16 14:26:16.625143 | orchestrator | ok 2025-01-16 14:26:16.642884 | 2025-01-16 14:26:16.643012 | TASK [validate-host : Sanitize hostname] 2025-01-16 14:26:16.732753 | orchestrator | ok 2025-01-16 14:26:16.741697 | 2025-01-16 14:26:16.741826 | TASK [validate-host : Write out all ansible variables/facts known for each host] 2025-01-16 14:26:17.275480 | orchestrator -> localhost | changed 2025-01-16 14:26:17.283984 | 2025-01-16 14:26:17.284192 | TASK [validate-host : Collect information about zuul worker] 2025-01-16 14:26:17.841811 | orchestrator | ok 2025-01-16 14:26:17.852234 | 2025-01-16 14:26:17.852369 | TASK [validate-host : Write out all zuul information for each host] 2025-01-16 14:26:18.397443 | orchestrator -> localhost | changed 2025-01-16 14:26:18.420237 | 2025-01-16 14:26:18.420370 | TASK [prepare-workspace-log : Start zuul_console daemon.] 2025-01-16 14:26:18.744541 | orchestrator | ok 2025-01-16 14:26:18.752759 | 2025-01-16 14:26:18.752876 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.] 2025-01-16 14:27:02.535991 | orchestrator | changed: 2025-01-16 14:27:02.536216 | orchestrator | .d..t...... src/ 2025-01-16 14:27:02.536270 | orchestrator | .d..t...... src/github.com/ 2025-01-16 14:27:02.536310 | orchestrator | .d..t...... src/github.com/osism/ 2025-01-16 14:27:02.536344 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/ 2025-01-16 14:27:02.536378 | orchestrator | RedHat.yml 2025-01-16 14:27:02.555719 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml 2025-01-16 14:27:02.555737 | orchestrator | RedHat.yml 2025-01-16 14:27:02.555795 | orchestrator | = 1.53.0"... 2025-01-16 14:27:13.801445 | orchestrator | 14:27:13.801 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"... 2025-01-16 14:27:13.858891 | orchestrator | 14:27:13.858 STDOUT terraform: - Finding latest version of hashicorp/null... 2025-01-16 14:27:15.102240 | orchestrator | 14:27:15.100 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.0.0... 2025-01-16 14:27:16.334430 | orchestrator | 14:27:16.334 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.0.0 (signed, key ID 4F80527A391BEFD2) 2025-01-16 14:27:17.186590 | orchestrator | 14:27:17.186 STDOUT terraform: - Installing hashicorp/local v2.5.2... 2025-01-16 14:27:18.094453 | orchestrator | 14:27:18.094 STDOUT terraform: - Installed hashicorp/local v2.5.2 (signed, key ID 0C0AF313E5FD9F80) 2025-01-16 14:27:18.977829 | orchestrator | 14:27:18.977 STDOUT terraform: - Installing hashicorp/null v3.2.3... 2025-01-16 14:27:19.784878 | orchestrator | 14:27:19.784 STDOUT terraform: - Installed hashicorp/null v3.2.3 (signed, key ID 0C0AF313E5FD9F80) 2025-01-16 14:27:19.784985 | orchestrator | 14:27:19.784 STDOUT terraform: Providers are signed by their developers. 
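The provider constraints that "tofu init" resolves above would normally be declared in a required_providers block in the testbed's OpenTofu configuration. A minimal sketch consistent with this run; the constraints are inferred from the init output in this log (the openstack constraint only appears as a fragment here) and are not copied from the actual osism/testbed sources:

# Sketch only: version constraints inferred from the init output above.
terraform {
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0"   # assumed from the ">= 1.53.0" fragment; v3.0.0 was selected in this run
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0"    # v2.5.2 was selected in this run
    }
    null = {
      source = "hashicorp/null" # unconstrained; latest (v3.2.3) was selected
    }
  }
}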
2025-01-16 14:27:19.784996 | orchestrator | 14:27:19.784 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here: 2025-01-16 14:27:19.785016 | orchestrator | 14:27:19.784 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/ 2025-01-16 14:27:19.785025 | orchestrator | 14:27:19.784 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider 2025-01-16 14:27:19.785082 | orchestrator | 14:27:19.785 STDOUT terraform: selections it made above. Include this file in your version control repository 2025-01-16 14:27:19.785140 | orchestrator | 14:27:19.785 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when 2025-01-16 14:27:19.785161 | orchestrator | 14:27:19.785 STDOUT terraform: you run "tofu init" in the future. 2025-01-16 14:27:19.785209 | orchestrator | 14:27:19.785 STDOUT terraform: OpenTofu has been successfully initialized! 2025-01-16 14:27:19.785266 | orchestrator | 14:27:19.785 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see 2025-01-16 14:27:19.785321 | orchestrator | 14:27:19.785 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands 2025-01-16 14:27:19.785341 | orchestrator | 14:27:19.785 STDOUT terraform: should now work. 2025-01-16 14:27:19.785397 | orchestrator | 14:27:19.785 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu, 2025-01-16 14:27:19.785467 | orchestrator | 14:27:19.785 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other 2025-01-16 14:27:19.785507 | orchestrator | 14:27:19.785 STDOUT terraform: commands will detect it and remind you to do so if necessary. 2025-01-16 14:27:20.131656 | orchestrator | 14:27:20.131 STDOUT terraform: Created and switched to workspace "ci"! 2025-01-16 14:27:20.131732 | orchestrator | 14:27:20.131 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state, 2025-01-16 14:27:20.131759 | orchestrator | 14:27:20.131 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state 2025-01-16 14:27:20.131772 | orchestrator | 14:27:20.131 STDOUT terraform: for this configuration. 2025-01-16 14:27:20.417276 | orchestrator | 14:27:20.416 STDOUT terraform: ci.auto.tfvars 2025-01-16 14:27:21.594781 | orchestrator | 14:27:21.593 STDOUT terraform: data.openstack_networking_network_v2.public: Reading... 2025-01-16 14:27:22.106287 | orchestrator | 14:27:22.105 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a] 2025-01-16 14:27:22.426770 | orchestrator | 14:27:22.426 STDOUT terraform: OpenTofu used the selected providers to generate the following execution 2025-01-16 14:27:22.426941 | orchestrator | 14:27:22.426 STDOUT terraform: plan. 
Resource actions are indicated with the following symbols: 2025-01-16 14:27:22.427017 | orchestrator | 14:27:22.426 STDOUT terraform:  + create 2025-01-16 14:27:22.427046 | orchestrator | 14:27:22.426 STDOUT terraform:  <= read (data resources) 2025-01-16 14:27:22.427099 | orchestrator | 14:27:22.426 STDOUT terraform: OpenTofu will perform the following actions: 2025-01-16 14:27:22.427122 | orchestrator | 14:27:22.426 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply 2025-01-16 14:27:22.427142 | orchestrator | 14:27:22.426 STDOUT terraform:  # (config refers to values not yet known) 2025-01-16 14:27:22.427261 | orchestrator | 14:27:22.426 STDOUT terraform:  <= data "openstack_images_image_v2" "image" { 2025-01-16 14:27:22.427282 | orchestrator | 14:27:22.427 STDOUT terraform:  + checksum = (known after apply) 2025-01-16 14:27:22.427297 | orchestrator | 14:27:22.427 STDOUT terraform:  + created_at = (known after apply) 2025-01-16 14:27:22.427316 | orchestrator | 14:27:22.427 STDOUT terraform:  + file = (known after apply) 2025-01-16 14:27:22.427359 | orchestrator | 14:27:22.427 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.427401 | orchestrator | 14:27:22.427 STDOUT terraform:  + metadata = (known after apply) 2025-01-16 14:27:22.427416 | orchestrator | 14:27:22.427 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-01-16 14:27:22.427434 | orchestrator | 14:27:22.427 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-01-16 14:27:22.427451 | orchestrator | 14:27:22.427 STDOUT terraform:  + most_recent = true 2025-01-16 14:27:22.427469 | orchestrator | 14:27:22.427 STDOUT terraform:  + name = (known after apply) 2025-01-16 14:27:22.427512 | orchestrator | 14:27:22.427 STDOUT terraform:  + protected = (known after apply) 2025-01-16 14:27:22.427532 | orchestrator | 14:27:22.427 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.427600 | orchestrator | 14:27:22.427 STDOUT terraform:  + schema = (known after apply) 2025-01-16 14:27:22.427643 | orchestrator | 14:27:22.427 STDOUT terraform:  + size_bytes = (known after apply) 2025-01-16 14:27:22.427662 | orchestrator | 14:27:22.427 STDOUT terraform:  + tags = (known after apply) 2025-01-16 14:27:22.427688 | orchestrator | 14:27:22.427 STDOUT terraform:  + updated_at = (known after apply) 2025-01-16 14:27:22.427726 | orchestrator | 14:27:22.427 STDOUT terraform:  } 2025-01-16 14:27:22.427793 | orchestrator | 14:27:22.427 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply 2025-01-16 14:27:22.427816 | orchestrator | 14:27:22.427 STDOUT terraform:  # (config refers to values not yet known) 2025-01-16 14:27:22.427831 | orchestrator | 14:27:22.427 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" { 2025-01-16 14:27:22.427849 | orchestrator | 14:27:22.427 STDOUT terraform:  + checksum = (known after apply) 2025-01-16 14:27:22.427869 | orchestrator | 14:27:22.427 STDOUT terraform:  + created_at = (known after apply) 2025-01-16 14:27:22.427887 | orchestrator | 14:27:22.427 STDOUT terraform:  + file = (known after apply) 2025-01-16 14:27:22.427939 | orchestrator | 14:27:22.427 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.428011 | orchestrator | 14:27:22.427 STDOUT terraform:  + metadata = (known after apply) 2025-01-16 14:27:22.428031 | orchestrator | 14:27:22.427 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-01-16 14:27:22.428124 | orchestrator | 14:27:22.428 STDOUT terraform:  + 
min_ram_mb = (known after apply) 2025-01-16 14:27:22.428164 | orchestrator | 14:27:22.428 STDOUT terraform:  + most_recent = true 2025-01-16 14:27:22.428230 | orchestrator | 14:27:22.428 STDOUT terraform:  + name = (known after apply) 2025-01-16 14:27:22.428866 | orchestrator | 14:27:22.428 STDOUT terraform:  + protected = (known after apply) 2025-01-16 14:27:22.428962 | orchestrator | 14:27:22.428 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.429012 | orchestrator | 14:27:22.428 STDOUT terraform:  + schema = (known after apply) 2025-01-16 14:27:22.429028 | orchestrator | 14:27:22.428 STDOUT terraform:  + size_bytes = (known after apply) 2025-01-16 14:27:22.429042 | orchestrator | 14:27:22.428 STDOUT terraform:  + tags = (known after apply) 2025-01-16 14:27:22.429076 | orchestrator | 14:27:22.428 STDOUT terraform:  + updated_at = (known after apply) 2025-01-16 14:27:22.429091 | orchestrator | 14:27:22.428 STDOUT terraform:  } 2025-01-16 14:27:22.429107 | orchestrator | 14:27:22.428 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created 2025-01-16 14:27:22.429121 | orchestrator | 14:27:22.428 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" { 2025-01-16 14:27:22.429135 | orchestrator | 14:27:22.428 STDOUT terraform:  + content = (known after apply) 2025-01-16 14:27:22.429149 | orchestrator | 14:27:22.428 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-01-16 14:27:22.429163 | orchestrator | 14:27:22.428 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-01-16 14:27:22.429177 | orchestrator | 14:27:22.428 STDOUT terraform:  + content_md5 = (known after apply) 2025-01-16 14:27:22.429228 | orchestrator | 14:27:22.428 STDOUT terraform:  + content_sha1 = (known after apply) 2025-01-16 14:27:22.429243 | orchestrator | 14:27:22.428 STDOUT terraform:  + content_sha256 = (known after apply) 2025-01-16 14:27:22.429257 | orchestrator | 14:27:22.428 STDOUT terraform:  + content_sha512 = (known after apply) 2025-01-16 14:27:22.429275 | orchestrator | 14:27:22.428 STDOUT terraform:  + directory_permission = "0777" 2025-01-16 14:27:22.429327 | orchestrator | 14:27:22.429 STDOUT terraform:  + file_permission = "0644" 2025-01-16 14:27:22.429360 | orchestrator | 14:27:22.429 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci" 2025-01-16 14:27:22.429395 | orchestrator | 14:27:22.429 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.429409 | orchestrator | 14:27:22.429 STDOUT terraform:  } 2025-01-16 14:27:22.429423 | orchestrator | 14:27:22.429 STDOUT terraform:  # local_file.id_rsa_pub will be created 2025-01-16 14:27:22.429443 | orchestrator | 14:27:22.429 STDOUT terraform:  + resource "local_file" "id_rsa_pub" { 2025-01-16 14:27:22.434391 | orchestrator | 14:27:22.429 STDOUT terraform:  + content = (known after apply) 2025-01-16 14:27:22.434517 | orchestrator | 14:27:22.429 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-01-16 14:27:22.434533 | orchestrator | 14:27:22.434 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-01-16 14:27:22.434545 | orchestrator | 14:27:22.434 STDOUT terraform:  + content_md5 = (known after apply) 2025-01-16 14:27:22.434558 | orchestrator | 14:27:22.434 STDOUT terraform:  + content_sha1 = (known after apply) 2025-01-16 14:27:22.434574 | orchestrator | 14:27:22.434 STDOUT terraform:  + content_sha256 = (known after apply) 2025-01-16 14:27:22.434614 | orchestrator | 14:27:22.434 STDOUT terraform:  + content_sha512 = (known after apply) 
2025-01-16 14:27:22.434632 | orchestrator | 14:27:22.434 STDOUT terraform:  + directory_permission = "0777" 2025-01-16 14:27:22.434693 | orchestrator | 14:27:22.434 STDOUT terraform:  + file_permission = "0644" 2025-01-16 14:27:22.434710 | orchestrator | 14:27:22.434 STDOUT terraform:  + filename = ".id_rsa.ci.pub" 2025-01-16 14:27:22.434723 | orchestrator | 14:27:22.434 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.434739 | orchestrator | 14:27:22.434 STDOUT terraform:  } 2025-01-16 14:27:22.434755 | orchestrator | 14:27:22.434 STDOUT terraform:  # local_file.inventory will be created 2025-01-16 14:27:22.434794 | orchestrator | 14:27:22.434 STDOUT terraform:  + resource "local_file" "inventory" { 2025-01-16 14:27:22.434842 | orchestrator | 14:27:22.434 STDOUT terraform:  + content = (known after apply) 2025-01-16 14:27:22.434875 | orchestrator | 14:27:22.434 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-01-16 14:27:22.434936 | orchestrator | 14:27:22.434 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-01-16 14:27:22.434988 | orchestrator | 14:27:22.434 STDOUT terraform:  + content_md5 = (known after apply) 2025-01-16 14:27:22.435004 | orchestrator | 14:27:22.434 STDOUT terraform:  + content_sha1 = (known after apply) 2025-01-16 14:27:22.435053 | orchestrator | 14:27:22.434 STDOUT terraform:  + content_sha256 = (known after apply) 2025-01-16 14:27:22.435115 | orchestrator | 14:27:22.435 STDOUT terraform:  + content_sha512 = (known after apply) 2025-01-16 14:27:22.435155 | orchestrator | 14:27:22.435 STDOUT terraform:  + directory_permission = "0777" 2025-01-16 14:27:22.435171 | orchestrator | 14:27:22.435 STDOUT terraform:  + file_permission = "0644" 2025-01-16 14:27:22.435295 | orchestrator | 14:27:22.435 STDOUT terraform:  + filename = "inventory.ci" 2025-01-16 14:27:22.435312 | orchestrator | 14:27:22.435 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.435342 | orchestrator | 14:27:22.435 STDOUT terraform:  } 2025-01-16 14:27:22.435359 | orchestrator | 14:27:22.435 STDOUT terraform:  # local_sensitive_file.id_rsa will be created 2025-01-16 14:27:22.435399 | orchestrator | 14:27:22.435 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" { 2025-01-16 14:27:22.435416 | orchestrator | 14:27:22.435 STDOUT terraform:  + content = (sensitive value) 2025-01-16 14:27:22.435432 | orchestrator | 14:27:22.435 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-01-16 14:27:22.435495 | orchestrator | 14:27:22.435 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-01-16 14:27:22.435514 | orchestrator | 14:27:22.435 STDOUT terraform:  + content_md5 = (known after apply) 2025-01-16 14:27:22.435589 | orchestrator | 14:27:22.435 STDOUT terraform:  + content_sha1 = (known after apply) 2025-01-16 14:27:22.435607 | orchestrator | 14:27:22.435 STDOUT terraform:  + content_sha256 = (known after apply) 2025-01-16 14:27:22.435670 | orchestrator | 14:27:22.435 STDOUT terraform:  + content_sha512 = (known after apply) 2025-01-16 14:27:22.435688 | orchestrator | 14:27:22.435 STDOUT terraform:  + directory_permission = "0700" 2025-01-16 14:27:22.435717 | orchestrator | 14:27:22.435 STDOUT terraform:  + file_permission = "0600" 2025-01-16 14:27:22.435735 | orchestrator | 14:27:22.435 STDOUT terraform:  + filename = ".id_rsa.ci" 2025-01-16 14:27:22.435793 | orchestrator | 14:27:22.435 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.435861 | orchestrator | 14:27:22.435 STDOUT 
terraform:  } 2025-01-16 14:27:22.435880 | orchestrator | 14:27:22.435 STDOUT terraform:  # null_resource.node_semaphore will be created 2025-01-16 14:27:22.435893 | orchestrator | 14:27:22.435 STDOUT terraform:  + resource "null_resource" "node_semaphore" { 2025-01-16 14:27:22.435909 | orchestrator | 14:27:22.435 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.436001 | orchestrator | 14:27:22.435 STDOUT terraform:  } 2025-01-16 14:27:22.436036 | orchestrator | 14:27:22.435 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created 2025-01-16 14:27:22.436053 | orchestrator | 14:27:22.435 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" { 2025-01-16 14:27:22.436097 | orchestrator | 14:27:22.436 STDOUT terraform:  + attachment = (known after apply) 2025-01-16 14:27:22.436114 | orchestrator | 14:27:22.436 STDOUT terraform:  + availability_zone = "nova" 2025-01-16 14:27:22.436200 | orchestrator | 14:27:22.436 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.436258 | orchestrator | 14:27:22.436 STDOUT terraform:  + image_id = (known after apply) 2025-01-16 14:27:22.436276 | orchestrator | 14:27:22.436 STDOUT terraform:  + metadata = (known after apply) 2025-01-16 14:27:22.436292 | orchestrator | 14:27:22.436 STDOUT terraform:  + name = "testbed-volume-manager-base" 2025-01-16 14:27:22.436331 | orchestrator | 14:27:22.436 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.436347 | orchestrator | 14:27:22.436 STDOUT terraform:  + size = 80 2025-01-16 14:27:22.436397 | orchestrator | 14:27:22.436 STDOUT terraform:  + volume_type = "ssd" 2025-01-16 14:27:22.436746 | orchestrator | 14:27:22.436 STDOUT terraform:  } 2025-01-16 14:27:22.436777 | orchestrator | 14:27:22.436 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created 2025-01-16 14:27:22.437179 | orchestrator | 14:27:22.436 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-01-16 14:27:22.437359 | orchestrator | 14:27:22.437 STDOUT terraform:  + attachment = (known after apply) 2025-01-16 14:27:22.437662 | orchestrator | 14:27:22.437 STDOUT terraform:  + availability_zone = "nova" 2025-01-16 14:27:22.437837 | orchestrator | 14:27:22.437 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.438006 | orchestrator | 14:27:22.437 STDOUT terraform:  + image_id = (known after apply) 2025-01-16 14:27:22.438174 | orchestrator | 14:27:22.437 STDOUT terraform:  + metadata = (known after apply) 2025-01-16 14:27:22.438432 | orchestrator | 14:27:22.438 STDOUT terraform:  + name = "testbed-volume-0-node-base" 2025-01-16 14:27:22.438610 | orchestrator | 14:27:22.438 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.438777 | orchestrator | 14:27:22.438 STDOUT terraform:  + size = 80 2025-01-16 14:27:22.438797 | orchestrator | 14:27:22.438 STDOUT terraform:  + volume_type = "ssd" 2025-01-16 14:27:22.438865 | orchestrator | 14:27:22.438 STDOUT terraform:  } 2025-01-16 14:27:22.439131 | orchestrator | 14:27:22.438 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created 2025-01-16 14:27:22.439361 | orchestrator | 14:27:22.439 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-01-16 14:27:22.439536 | orchestrator | 14:27:22.439 STDOUT terraform:  + attachment = (known after apply) 2025-01-16 14:27:22.439643 | orchestrator | 14:27:22.439 STDOUT terraform:  + 
availability_zone = "nova" 2025-01-16 14:27:22.439789 | orchestrator | 14:27:22.439 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.439868 | orchestrator | 14:27:22.439 STDOUT terraform:  + image_id = (known after apply) 2025-01-16 14:27:22.439952 | orchestrator | 14:27:22.439 STDOUT terraform:  + metadata = (known after apply) 2025-01-16 14:27:22.440079 | orchestrator | 14:27:22.439 STDOUT terraform:  + name = "testbed-volume-1-node-base" 2025-01-16 14:27:22.440230 | orchestrator | 14:27:22.440 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.440303 | orchestrator | 14:27:22.440 STDOUT terraform:  + size = 80 2025-01-16 14:27:22.440382 | orchestrator | 14:27:22.440 STDOUT terraform:  + volume_type = "ssd" 2025-01-16 14:27:22.440445 | orchestrator | 14:27:22.440 STDOUT terraform:  } 2025-01-16 14:27:22.440584 | orchestrator | 14:27:22.440 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created 2025-01-16 14:27:22.440702 | orchestrator | 14:27:22.440 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-01-16 14:27:22.440785 | orchestrator | 14:27:22.440 STDOUT terraform:  + attachment = (known after apply) 2025-01-16 14:27:22.440851 | orchestrator | 14:27:22.440 STDOUT terraform:  + availability_zone = "nova" 2025-01-16 14:27:22.440981 | orchestrator | 14:27:22.440 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.441079 | orchestrator | 14:27:22.441 STDOUT terraform:  + image_id = (known after apply) 2025-01-16 14:27:22.441205 | orchestrator | 14:27:22.441 STDOUT terraform:  + metadata = (known after apply) 2025-01-16 14:27:22.441311 | orchestrator | 14:27:22.441 STDOUT terraform:  + name = "testbed-volume-2-node-base" 2025-01-16 14:27:22.441396 | orchestrator | 14:27:22.441 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.441462 | orchestrator | 14:27:22.441 STDOUT terraform:  + size = 80 2025-01-16 14:27:22.441527 | orchestrator | 14:27:22.441 STDOUT terraform:  + volume_type = "ssd" 2025-01-16 14:27:22.441577 | orchestrator | 14:27:22.441 STDOUT terraform:  } 2025-01-16 14:27:22.441695 | orchestrator | 14:27:22.441 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created 2025-01-16 14:27:22.441816 | orchestrator | 14:27:22.441 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-01-16 14:27:22.441901 | orchestrator | 14:27:22.441 STDOUT terraform:  + attachment = (known after apply) 2025-01-16 14:27:22.441969 | orchestrator | 14:27:22.441 STDOUT terraform:  + availability_zone = "nova" 2025-01-16 14:27:22.442078 | orchestrator | 14:27:22.441 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.442165 | orchestrator | 14:27:22.442 STDOUT terraform:  + image_id = (known after apply) 2025-01-16 14:27:22.442299 | orchestrator | 14:27:22.442 STDOUT terraform:  + metadata = (known after apply) 2025-01-16 14:27:22.442394 | orchestrator | 14:27:22.442 STDOUT terraform:  + name = "testbed-volume-3-node-base" 2025-01-16 14:27:22.442487 | orchestrator | 14:27:22.442 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.442540 | orchestrator | 14:27:22.442 STDOUT terraform:  + size = 80 2025-01-16 14:27:22.442593 | orchestrator | 14:27:22.442 STDOUT terraform:  + volume_type = "ssd" 2025-01-16 14:27:22.442637 | orchestrator | 14:27:22.442 STDOUT terraform:  } 2025-01-16 14:27:22.442735 | orchestrator | 14:27:22.442 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_base_volume[4] will be created 2025-01-16 14:27:22.442830 | orchestrator | 14:27:22.442 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-01-16 14:27:22.442899 | orchestrator | 14:27:22.442 STDOUT terraform:  + attachment = (known after apply) 2025-01-16 14:27:22.442950 | orchestrator | 14:27:22.442 STDOUT terraform:  + availability_zone = "nova" 2025-01-16 14:27:22.443019 | orchestrator | 14:27:22.442 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.443089 | orchestrator | 14:27:22.443 STDOUT terraform:  + image_id = (known after apply) 2025-01-16 14:27:22.443156 | orchestrator | 14:27:22.443 STDOUT terraform:  + metadata = (known after apply) 2025-01-16 14:27:22.443255 | orchestrator | 14:27:22.443 STDOUT terraform:  + name = "testbed-volume-4-node-base" 2025-01-16 14:27:22.443327 | orchestrator | 14:27:22.443 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.443379 | orchestrator | 14:27:22.443 STDOUT terraform:  + size = 80 2025-01-16 14:27:22.443450 | orchestrator | 14:27:22.443 STDOUT terraform:  + volume_type = "ssd" 2025-01-16 14:27:22.443491 | orchestrator | 14:27:22.443 STDOUT terraform:  } 2025-01-16 14:27:22.443589 | orchestrator | 14:27:22.443 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created 2025-01-16 14:27:22.443684 | orchestrator | 14:27:22.443 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-01-16 14:27:22.443766 | orchestrator | 14:27:22.443 STDOUT terraform:  + attachment = (known after apply) 2025-01-16 14:27:22.443819 | orchestrator | 14:27:22.443 STDOUT terraform:  + availability_zone = "nova" 2025-01-16 14:27:22.443889 | orchestrator | 14:27:22.443 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.443958 | orchestrator | 14:27:22.443 STDOUT terraform:  + image_id = (known after apply) 2025-01-16 14:27:22.444026 | orchestrator | 14:27:22.443 STDOUT terraform:  + metadata = (known after apply) 2025-01-16 14:27:22.444108 | orchestrator | 14:27:22.444 STDOUT terraform:  + name = "testbed-volume-5-node-base" 2025-01-16 14:27:22.444179 | orchestrator | 14:27:22.444 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.444326 | orchestrator | 14:27:22.444 STDOUT terraform:  + size = 80 2025-01-16 14:27:22.444379 | orchestrator | 14:27:22.444 STDOUT terraform:  + volume_type = "ssd" 2025-01-16 14:27:22.444417 | orchestrator | 14:27:22.444 STDOUT terraform:  } 2025-01-16 14:27:22.444506 | orchestrator | 14:27:22.444 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created 2025-01-16 14:27:22.444589 | orchestrator | 14:27:22.444 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-01-16 14:27:22.444661 | orchestrator | 14:27:22.444 STDOUT terraform:  + attachment = (known after apply) 2025-01-16 14:27:22.444710 | orchestrator | 14:27:22.444 STDOUT terraform:  + availability_zone = "nova" 2025-01-16 14:27:22.444776 | orchestrator | 14:27:22.444 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.444838 | orchestrator | 14:27:22.444 STDOUT terraform:  + metadata = (known after apply) 2025-01-16 14:27:22.444917 | orchestrator | 14:27:22.444 STDOUT terraform:  + name = "testbed-volume-0-node-0" 2025-01-16 14:27:22.444983 | orchestrator | 14:27:22.444 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.445090 | orchestrator | 14:27:22.445 STDOUT terraform:  + size 
= 20 2025-01-16 14:27:22.445171 | orchestrator | 14:27:22.445 STDOUT terraform:  + volume_type = "ssd" 2025-01-16 14:27:22.445254 | orchestrator | 14:27:22.445 STDOUT terraform:  } 2025-01-16 14:27:22.445386 | orchestrator | 14:27:22.445 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-01-16 14:27:22.445483 | orchestrator | 14:27:22.445 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-01-16 14:27:22.445546 | orchestrator | 14:27:22.445 STDOUT terraform:  + attachment = (known after apply) 2025-01-16 14:27:22.445593 | orchestrator | 14:27:22.445 STDOUT terraform:  + availability_zone = "nova" 2025-01-16 14:27:22.445662 | orchestrator | 14:27:22.445 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.445735 | orchestrator | 14:27:22.445 STDOUT terraform:  + metadata = (known after apply) 2025-01-16 14:27:22.445812 | orchestrator | 14:27:22.445 STDOUT terraform:  + name = "testbed-volume-1-node-1" 2025-01-16 14:27:22.445878 | orchestrator | 14:27:22.445 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.445940 | orchestrator | 14:27:22.445 STDOUT terraform:  + size = 20 2025-01-16 14:27:22.445991 | orchestrator | 14:27:22.445 STDOUT terraform:  + volume_type = "ssd" 2025-01-16 14:27:22.446062 | orchestrator | 14:27:22.446 STDOUT terraform:  } 2025-01-16 14:27:22.446151 | orchestrator | 14:27:22.446 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-01-16 14:27:22.446254 | orchestrator | 14:27:22.446 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-01-16 14:27:22.446319 | orchestrator | 14:27:22.446 STDOUT terraform:  + attachment = (known after apply) 2025-01-16 14:27:22.446365 | orchestrator | 14:27:22.446 STDOUT terraform:  + availability_zone = "nova" 2025-01-16 14:27:22.446427 | orchestrator | 14:27:22.446 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.446492 | orchestrator | 14:27:22.446 STDOUT terraform:  + metadata = (known after apply) 2025-01-16 14:27:22.446565 | orchestrator | 14:27:22.446 STDOUT terraform:  + name = "testbed-volume-2-node-2" 2025-01-16 14:27:22.446627 | orchestrator | 14:27:22.446 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.446673 | orchestrator | 14:27:22.446 STDOUT terraform:  + size = 20 2025-01-16 14:27:22.446726 | orchestrator | 14:27:22.446 STDOUT terraform:  + volume_type = "ssd" 2025-01-16 14:27:22.446764 | orchestrator | 14:27:22.446 STDOUT terraform:  } 2025-01-16 14:27:22.446847 | orchestrator | 14:27:22.446 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-01-16 14:27:22.446930 | orchestrator | 14:27:22.446 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-01-16 14:27:22.446991 | orchestrator | 14:27:22.446 STDOUT terraform:  + attachment = (known after apply) 2025-01-16 14:27:22.447036 | orchestrator | 14:27:22.447 STDOUT terraform:  + availability_zone = "nova" 2025-01-16 14:27:22.447100 | orchestrator | 14:27:22.447 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.447162 | orchestrator | 14:27:22.447 STDOUT terraform:  + metadata = (known after apply) 2025-01-16 14:27:22.447283 | orchestrator | 14:27:22.447 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-01-16 14:27:22.447345 | orchestrator | 14:27:22.447 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.447386 | orchestrator | 14:27:22.447 STDOUT 
terraform:  + size = 20 2025-01-16 14:27:22.447432 | orchestrator | 14:27:22.447 STDOUT terraform:  + volume_type = "ssd" 2025-01-16 14:27:22.447483 | orchestrator | 14:27:22.447 STDOUT terraform:  } 2025-01-16 14:27:22.447603 | orchestrator | 14:27:22.447 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-01-16 14:27:22.447699 | orchestrator | 14:27:22.447 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-01-16 14:27:22.447768 | orchestrator | 14:27:22.447 STDOUT terraform:  + attachment = (known after apply) 2025-01-16 14:27:22.447824 | orchestrator | 14:27:22.447 STDOUT terraform:  + availability_zone = "nova" 2025-01-16 14:27:22.447884 | orchestrator | 14:27:22.447 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.447940 | orchestrator | 14:27:22.447 STDOUT terraform:  + metadata = (known after apply) 2025-01-16 14:27:22.448023 | orchestrator | 14:27:22.447 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-01-16 14:27:22.448087 | orchestrator | 14:27:22.448 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.448130 | orchestrator | 14:27:22.448 STDOUT terraform:  + size = 20 2025-01-16 14:27:22.448172 | orchestrator | 14:27:22.448 STDOUT terraform:  + volume_type = "ssd" 2025-01-16 14:27:22.448223 | orchestrator | 14:27:22.448 STDOUT terraform:  } 2025-01-16 14:27:22.448300 | orchestrator | 14:27:22.448 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-01-16 14:27:22.448373 | orchestrator | 14:27:22.448 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-01-16 14:27:22.448436 | orchestrator | 14:27:22.448 STDOUT terraform:  + attachment = (known after apply) 2025-01-16 14:27:22.448479 | orchestrator | 14:27:22.448 STDOUT terraform:  + availability_zone = "nova" 2025-01-16 14:27:22.448534 | orchestrator | 14:27:22.448 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.448595 | orchestrator | 14:27:22.448 STDOUT terraform:  + metadata = (known after apply) 2025-01-16 14:27:22.448661 | orchestrator | 14:27:22.448 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-01-16 14:27:22.448715 | orchestrator | 14:27:22.448 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.448756 | orchestrator | 14:27:22.448 STDOUT terraform:  + size = 20 2025-01-16 14:27:22.448797 | orchestrator | 14:27:22.448 STDOUT terraform:  + volume_type = "ssd" 2025-01-16 14:27:22.448830 | orchestrator | 14:27:22.448 STDOUT terraform:  } 2025-01-16 14:27:22.448902 | orchestrator | 14:27:22.448 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-01-16 14:27:22.448975 | orchestrator | 14:27:22.448 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-01-16 14:27:22.449032 | orchestrator | 14:27:22.448 STDOUT terraform:  + attachment = (known after apply) 2025-01-16 14:27:22.449078 | orchestrator | 14:27:22.449 STDOUT terraform:  + availability_zone = "nova" 2025-01-16 14:27:22.449138 | orchestrator | 14:27:22.449 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.449246 | orchestrator | 14:27:22.449 STDOUT terraform:  + metadata = (known after apply) 2025-01-16 14:27:22.449319 | orchestrator | 14:27:22.449 STDOUT terraform:  + name = "testbed-volume-6-node-0" 2025-01-16 14:27:22.449377 | orchestrator | 14:27:22.449 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.449418 | orchestrator | 
14:27:22.449 STDOUT terraform:  + size = 20 2025-01-16 14:27:22.449463 | orchestrator | 14:27:22.449 STDOUT terraform:  + volume_type = "ssd" 2025-01-16 14:27:22.449505 | orchestrator | 14:27:22.449 STDOUT terraform:  } 2025-01-16 14:27:22.449583 | orchestrator | 14:27:22.449 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-01-16 14:27:22.449659 | orchestrator | 14:27:22.449 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-01-16 14:27:22.449717 | orchestrator | 14:27:22.449 STDOUT terraform:  + attachment = (known after apply) 2025-01-16 14:27:22.449761 | orchestrator | 14:27:22.449 STDOUT terraform:  + availability_zone = "nova" 2025-01-16 14:27:22.449812 | orchestrator | 14:27:22.449 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.449861 | orchestrator | 14:27:22.449 STDOUT terraform:  + metadata = (known after apply) 2025-01-16 14:27:22.449919 | orchestrator | 14:27:22.449 STDOUT terraform:  + name = "testbed-volume-7-node-1" 2025-01-16 14:27:22.449971 | orchestrator | 14:27:22.449 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.450027 | orchestrator | 14:27:22.449 STDOUT terraform:  + size = 20 2025-01-16 14:27:22.450071 | orchestrator | 14:27:22.450 STDOUT terraform:  + volume_type = "ssd" 2025-01-16 14:27:22.450101 | orchestrator | 14:27:22.450 STDOUT terraform:  } 2025-01-16 14:27:22.450168 | orchestrator | 14:27:22.450 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-01-16 14:27:22.450249 | orchestrator | 14:27:22.450 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-01-16 14:27:22.450320 | orchestrator | 14:27:22.450 STDOUT terraform:  + attachment = (known after apply) 2025-01-16 14:27:22.450364 | orchestrator | 14:27:22.450 STDOUT terraform:  + availability_zone = "nova" 2025-01-16 14:27:22.450417 | orchestrator | 14:27:22.450 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.450467 | orchestrator | 14:27:22.450 STDOUT terraform:  + metadata = (known after apply) 2025-01-16 14:27:22.450527 | orchestrator | 14:27:22.450 STDOUT terraform:  + name = "testbed-volume-8-node-2" 2025-01-16 14:27:22.450575 | orchestrator | 14:27:22.450 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.450612 | orchestrator | 14:27:22.450 STDOUT terraform:  + size = 20 2025-01-16 14:27:22.450650 | orchestrator | 14:27:22.450 STDOUT terraform:  + volume_type = "ssd" 2025-01-16 14:27:22.450678 | orchestrator | 14:27:22.450 STDOUT terraform:  } 2025-01-16 14:27:22.450810 | orchestrator | 14:27:22.450 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[9] will be created 2025-01-16 14:27:22.450875 | orchestrator | 14:27:22.450 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-01-16 14:27:22.450923 | orchestrator | 14:27:22.450 STDOUT terraform:  + attachment = (known after apply) 2025-01-16 14:27:22.450959 | orchestrator | 14:27:22.450 STDOUT terraform:  + availability_zone = "nova" 2025-01-16 14:27:22.451007 | orchestrator | 14:27:22.450 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.451055 | orchestrator | 14:27:22.451 STDOUT terraform:  + metadata = (known after apply) 2025-01-16 14:27:22.451112 | orchestrator | 14:27:22.451 STDOUT terraform:  + name = "testbed-volume-9-node-3" 2025-01-16 14:27:22.451175 | orchestrator | 14:27:22.451 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.451232 | 
orchestrator | 14:27:22.451 STDOUT terraform:  + size = 20 2025-01-16 14:27:22.451270 | orchestrator | 14:27:22.451 STDOUT terraform:  + volume_type = "ssd" 2025-01-16 14:27:22.451297 | orchestrator | 14:27:22.451 STDOUT terraform:  } 2025-01-16 14:27:22.451366 | orchestrator | 14:27:22.451 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[10] will be created 2025-01-16 14:27:22.451432 | orchestrator | 14:27:22.451 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-01-16 14:27:22.451481 | orchestrator | 14:27:22.451 STDOUT terraform:  + attachment = (known after apply) 2025-01-16 14:27:22.451517 | orchestrator | 14:27:22.451 STDOUT terraform:  + availability_zone = "nova" 2025-01-16 14:27:22.451566 | orchestrator | 14:27:22.451 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.451617 | orchestrator | 14:27:22.451 STDOUT terraform:  + metadata = (known after apply) 2025-01-16 14:27:22.451674 | orchestrator | 14:27:22.451 STDOUT terraform:  + name = "testbed-volume-10-node-4" 2025-01-16 14:27:22.451724 | orchestrator | 14:27:22.451 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.451761 | orchestrator | 14:27:22.451 STDOUT terraform:  + size = 20 2025-01-16 14:27:22.451796 | orchestrator | 14:27:22.451 STDOUT terraform:  + volume_type = "ssd" 2025-01-16 14:27:22.451834 | orchestrator | 14:27:22.451 STDOUT terraform:  } 2025-01-16 14:27:22.451900 | orchestrator | 14:27:22.451 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[11] will be created 2025-01-16 14:27:22.451964 | orchestrator | 14:27:22.451 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-01-16 14:27:22.452013 | orchestrator | 14:27:22.451 STDOUT terraform:  + attachment = (known after apply) 2025-01-16 14:27:22.452049 | orchestrator | 14:27:22.452 STDOUT terraform:  + availability_zone = "nova" 2025-01-16 14:27:22.452098 | orchestrator | 14:27:22.452 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.452146 | orchestrator | 14:27:22.452 STDOUT terraform:  + metadata = (known after apply) 2025-01-16 14:27:22.452246 | orchestrator | 14:27:22.452 STDOUT terraform:  + name = "testbed-volume-11-node-5" 2025-01-16 14:27:22.452301 | orchestrator | 14:27:22.452 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.452339 | orchestrator | 14:27:22.452 STDOUT terraform:  + size = 20 2025-01-16 14:27:22.452378 | orchestrator | 14:27:22.452 STDOUT terraform:  + volume_type = "ssd" 2025-01-16 14:27:22.452406 | orchestrator | 14:27:22.452 STDOUT terraform:  } 2025-01-16 14:27:22.452474 | orchestrator | 14:27:22.452 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[12] will be created 2025-01-16 14:27:22.452537 | orchestrator | 14:27:22.452 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-01-16 14:27:22.452583 | orchestrator | 14:27:22.452 STDOUT terraform:  + attachment = (known after apply) 2025-01-16 14:27:22.452629 | orchestrator | 14:27:22.452 STDOUT terraform:  + availability_zone = "nova" 2025-01-16 14:27:22.452676 | orchestrator | 14:27:22.452 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.452723 | orchestrator | 14:27:22.452 STDOUT terraform:  + metadata = (known after apply) 2025-01-16 14:27:22.452778 | orchestrator | 14:27:22.452 STDOUT terraform:  + name = "testbed-volume-12-node-0" 2025-01-16 14:27:22.452826 | orchestrator | 14:27:22.452 STDOUT terraform:  + region = (known after apply) 
2025-01-16 14:27:22.452862 | orchestrator | 14:27:22.452 STDOUT terraform:  + size = 20 2025-01-16 14:27:22.452897 | orchestrator | 14:27:22.452 STDOUT terraform:  + volume_type = "ssd" 2025-01-16 14:27:22.452923 | orchestrator | 14:27:22.452 STDOUT terraform:  } 2025-01-16 14:27:22.452986 | orchestrator | 14:27:22.452 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[13] will be created 2025-01-16 14:27:22.453047 | orchestrator | 14:27:22.452 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-01-16 14:27:22.453094 | orchestrator | 14:27:22.453 STDOUT terraform:  + attachment = (known after apply) 2025-01-16 14:27:22.453129 | orchestrator | 14:27:22.453 STDOUT terraform:  + availability_zone = "nova" 2025-01-16 14:27:22.453177 | orchestrator | 14:27:22.453 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.453264 | orchestrator | 14:27:22.453 STDOUT terraform:  + metadata = (known after apply) 2025-01-16 14:27:22.453324 | orchestrator | 14:27:22.453 STDOUT terraform:  + name = "testbed-volume-13-node-1" 2025-01-16 14:27:22.453372 | orchestrator | 14:27:22.453 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.453410 | orchestrator | 14:27:22.453 STDOUT terraform:  + size = 20 2025-01-16 14:27:22.453447 | orchestrator | 14:27:22.453 STDOUT terraform:  + volume_type = "ssd" 2025-01-16 14:27:22.453475 | orchestrator | 14:27:22.453 STDOUT terraform:  } 2025-01-16 14:27:22.453539 | orchestrator | 14:27:22.453 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[14] will be created 2025-01-16 14:27:22.453600 | orchestrator | 14:27:22.453 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-01-16 14:27:22.453646 | orchestrator | 14:27:22.453 STDOUT terraform:  + attachment = (known after apply) 2025-01-16 14:27:22.453681 | orchestrator | 14:27:22.453 STDOUT terraform:  + availability_zone = "nova" 2025-01-16 14:27:22.453727 | orchestrator | 14:27:22.453 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.453772 | orchestrator | 14:27:22.453 STDOUT terraform:  + metadata = (known after apply) 2025-01-16 14:27:22.453828 | orchestrator | 14:27:22.453 STDOUT terraform:  + name = "testbed-volume-14-node-2" 2025-01-16 14:27:22.453876 | orchestrator | 14:27:22.453 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.453909 | orchestrator | 14:27:22.453 STDOUT terraform:  + size = 20 2025-01-16 14:27:22.453943 | orchestrator | 14:27:22.453 STDOUT terraform:  + volume_type = "ssd" 2025-01-16 14:27:22.453969 | orchestrator | 14:27:22.453 STDOUT terraform:  } 2025-01-16 14:27:22.454052 | orchestrator | 14:27:22.453 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[15] will be created 2025-01-16 14:27:22.454125 | orchestrator | 14:27:22.454 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-01-16 14:27:22.454174 | orchestrator | 14:27:22.454 STDOUT terraform:  + attachment = (known after apply) 2025-01-16 14:27:22.454234 | orchestrator | 14:27:22.454 STDOUT terraform:  + availability_zone = "nova" 2025-01-16 14:27:22.454285 | orchestrator | 14:27:22.454 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.454335 | orchestrator | 14:27:22.454 STDOUT terraform:  + metadata = (known after apply) 2025-01-16 14:27:22.454390 | orchestrator | 14:27:22.454 STDOUT terraform:  + name = "testbed-volume-15-node-3" 2025-01-16 14:27:22.454436 | orchestrator | 14:27:22.454 STDOUT terraform:  + region 
= (known after apply) 2025-01-16 14:27:22.454473 | orchestrator | 14:27:22.454 STDOUT terraform:  + size = 20 2025-01-16 14:27:22.454510 | orchestrator | 14:27:22.454 STDOUT terraform:  + volume_type = "ssd" 2025-01-16 14:27:22.454538 | orchestrator | 14:27:22.454 STDOUT terraform:  } 2025-01-16 14:27:22.454604 | orchestrator | 14:27:22.454 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[16] will be created 2025-01-16 14:27:22.454665 | orchestrator | 14:27:22.454 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-01-16 14:27:22.454710 | orchestrator | 14:27:22.454 STDOUT terraform:  + attachment = (known after apply) 2025-01-16 14:27:22.454744 | orchestrator | 14:27:22.454 STDOUT terraform:  + availability_zone = "nova" 2025-01-16 14:27:22.454792 | orchestrator | 14:27:22.454 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.454839 | orchestrator | 14:27:22.454 STDOUT terraform:  + metadata = (known after apply) 2025-01-16 14:27:22.454892 | orchestrator | 14:27:22.454 STDOUT terraform:  + name = "testbed-volume-16-node-4" 2025-01-16 14:27:22.454940 | orchestrator | 14:27:22.454 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.454975 | orchestrator | 14:27:22.454 STDOUT terraform:  + size = 20 2025-01-16 14:27:22.455009 | orchestrator | 14:27:22.454 STDOUT terraform:  + volume_type = "ssd" 2025-01-16 14:27:22.455037 | orchestrator | 14:27:22.455 STDOUT terraform:  } 2025-01-16 14:27:22.455098 | orchestrator | 14:27:22.455 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[17] will be created 2025-01-16 14:27:22.455160 | orchestrator | 14:27:22.455 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-01-16 14:27:22.455225 | orchestrator | 14:27:22.455 STDOUT terraform:  + attachment = (known after apply) 2025-01-16 14:27:22.455261 | orchestrator | 14:27:22.455 STDOUT terraform:  + availability_zone = "nova" 2025-01-16 14:27:22.455307 | orchestrator | 14:27:22.455 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.455352 | orchestrator | 14:27:22.455 STDOUT terraform:  + metadata = (known after apply) 2025-01-16 14:27:22.455424 | orchestrator | 14:27:22.455 STDOUT terraform:  + name = "testbed-volume-17-node-5" 2025-01-16 14:27:22.455481 | orchestrator | 14:27:22.455 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.455526 | orchestrator | 14:27:22.455 STDOUT terraform:  + size = 20 2025-01-16 14:27:22.455561 | orchestrator | 14:27:22.455 STDOUT terraform:  + volume_type = "ssd" 2025-01-16 14:27:22.455587 | orchestrator | 14:27:22.455 STDOUT terraform:  } 2025-01-16 14:27:22.455648 | orchestrator | 14:27:22.455 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-01-16 14:27:22.455712 | orchestrator | 14:27:22.455 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-01-16 14:27:22.455766 | orchestrator | 14:27:22.455 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-01-16 14:27:22.455818 | orchestrator | 14:27:22.455 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-01-16 14:27:22.455868 | orchestrator | 14:27:22.455 STDOUT terraform:  + all_metadata = (known after apply) 2025-01-16 14:27:22.455921 | orchestrator | 14:27:22.455 STDOUT terraform:  + all_tags = (known after apply) 2025-01-16 14:27:22.455957 | orchestrator | 14:27:22.455 STDOUT terraform:  + availability_zone = "nova" 2025-01-16 14:27:22.455992 | orchestrator | 
14:27:22.455 STDOUT terraform:  + config_drive = true 2025-01-16 14:27:22.456042 | orchestrator | 14:27:22.456 STDOUT terraform:  + created = (known after apply) 2025-01-16 14:27:22.456092 | orchestrator | 14:27:22.456 STDOUT terraform:  + flavor_id = (known after apply) 2025-01-16 14:27:22.456138 | orchestrator | 14:27:22.456 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-01-16 14:27:22.456235 | orchestrator | 14:27:22.456 STDOUT terraform:  + force_delete = false 2025-01-16 14:27:22.456299 | orchestrator | 14:27:22.456 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.456352 | orchestrator | 14:27:22.456 STDOUT terraform:  + image_id = (known after apply) 2025-01-16 14:27:22.456405 | orchestrator | 14:27:22.456 STDOUT terraform:  + image_name = (known after apply) 2025-01-16 14:27:22.456446 | orchestrator | 14:27:22.456 STDOUT terraform:  + key_pair = "testbed" 2025-01-16 14:27:22.456492 | orchestrator | 14:27:22.456 STDOUT terraform:  + name = "testbed-manager" 2025-01-16 14:27:22.456533 | orchestrator | 14:27:22.456 STDOUT terraform:  + power_state = "active" 2025-01-16 14:27:22.456584 | orchestrator | 14:27:22.456 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.456631 | orchestrator | 14:27:22.456 STDOUT terraform:  + security_groups = (known after apply) 2025-01-16 14:27:22.456667 | orchestrator | 14:27:22.456 STDOUT terraform:  + stop_before_destroy = false 2025-01-16 14:27:22.456715 | orchestrator | 14:27:22.456 STDOUT terraform:  + updated = (known after apply) 2025-01-16 14:27:22.456764 | orchestrator | 14:27:22.456 STDOUT terraform:  + user_data = (known after apply) 2025-01-16 14:27:22.456793 | orchestrator | 14:27:22.456 STDOUT terraform:  + block_device { 2025-01-16 14:27:22.456829 | orchestrator | 14:27:22.456 STDOUT terraform:  + boot_index = 0 2025-01-16 14:27:22.456874 | orchestrator | 14:27:22.456 STDOUT terraform:  + delete_on_termination = false 2025-01-16 14:27:22.456915 | orchestrator | 14:27:22.456 STDOUT terraform:  + destination_type = "volume" 2025-01-16 14:27:22.456963 | orchestrator | 14:27:22.456 STDOUT terraform:  + multiattach = false 2025-01-16 14:27:22.457007 | orchestrator | 14:27:22.456 STDOUT terraform:  + source_type = "volume" 2025-01-16 14:27:22.457057 | orchestrator | 14:27:22.457 STDOUT terraform:  + uuid = (known after apply) 2025-01-16 14:27:22.457087 | orchestrator | 14:27:22.457 STDOUT terraform:  } 2025-01-16 14:27:22.457113 | orchestrator | 14:27:22.457 STDOUT terraform:  + network { 2025-01-16 14:27:22.457145 | orchestrator | 14:27:22.457 STDOUT terraform:  + access_network = false 2025-01-16 14:27:22.457207 | orchestrator | 14:27:22.457 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-01-16 14:27:22.457250 | orchestrator | 14:27:22.457 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-01-16 14:27:22.457294 | orchestrator | 14:27:22.457 STDOUT terraform:  + mac = (known after apply) 2025-01-16 14:27:22.457337 | orchestrator | 14:27:22.457 STDOUT terraform:  + name = (known after apply) 2025-01-16 14:27:22.457378 | orchestrator | 14:27:22.457 STDOUT terraform:  + port = (known after apply) 2025-01-16 14:27:22.457422 | orchestrator | 14:27:22.457 STDOUT terraform:  + uuid = (known after apply) 2025-01-16 14:27:22.457447 | orchestrator | 14:27:22.457 STDOUT terraform:  } 2025-01-16 14:27:22.457470 | orchestrator | 14:27:22.457 STDOUT terraform:  } 2025-01-16 14:27:22.457523 | orchestrator | 14:27:22.457 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be 
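The manager instance planned above boots from a volume, attaches a single management port, and uses flavor OSISM-4V-16 with config_drive enabled. A hedged sketch of such a resource follows; the manager_volume reference is an assumed name, while the other values mirror the plan.

```hcl
# Hypothetical sketch of the planned manager instance. The referenced
# boot volume resource name is an assumption; flavor, key pair, AZ and
# the boot-from-volume layout come from the plan output above.
resource "openstack_compute_instance_v2" "manager_server" {
  name              = "testbed-manager"
  flavor_name       = "OSISM-4V-16"
  key_pair          = openstack_compute_keypair_v2.key.name
  availability_zone = "nova"
  config_drive      = true
  power_state       = "active"

  block_device {
    uuid                  = openstack_blockstorage_volume_v3.manager_volume.id # assumed resource name
    source_type           = "volume"
    destination_type      = "volume"
    boot_index            = 0
    delete_on_termination = false
  }

  network {
    port = openstack_networking_port_v2.manager_port_management.id
  }
}
```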
created 2025-01-16 14:27:22.457576 | orchestrator | 14:27:22.457 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-01-16 14:27:22.457623 | orchestrator | 14:27:22.457 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-01-16 14:27:22.457670 | orchestrator | 14:27:22.457 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-01-16 14:27:22.457740 | orchestrator | 14:27:22.457 STDOUT terraform:  + all_metadata = (known after apply) 2025-01-16 14:27:22.457817 | orchestrator | 14:27:22.457 STDOUT terraform:  + all_tags = (known after apply) 2025-01-16 14:27:22.457856 | orchestrator | 14:27:22.457 STDOUT terraform:  + availability_zone = "nova" 2025-01-16 14:27:22.457890 | orchestrator | 14:27:22.457 STDOUT terraform:  + config_drive = true 2025-01-16 14:27:22.457939 | orchestrator | 14:27:22.457 STDOUT terraform:  + created = (known after apply) 2025-01-16 14:27:22.457985 | orchestrator | 14:27:22.457 STDOUT terraform:  + flavor_id = (known after apply) 2025-01-16 14:27:22.458053 | orchestrator | 14:27:22.457 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-01-16 14:27:22.458091 | orchestrator | 14:27:22.458 STDOUT terraform:  + force_delete = false 2025-01-16 14:27:22.458141 | orchestrator | 14:27:22.458 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.458207 | orchestrator | 14:27:22.458 STDOUT terraform:  + image_id = (known after apply) 2025-01-16 14:27:22.458256 | orchestrator | 14:27:22.458 STDOUT terraform:  + image_name = (known after apply) 2025-01-16 14:27:22.458293 | orchestrator | 14:27:22.458 STDOUT terraform:  + key_pair = "testbed" 2025-01-16 14:27:22.458341 | orchestrator | 14:27:22.458 STDOUT terraform:  + name = "testbed-node-0" 2025-01-16 14:27:22.458377 | orchestrator | 14:27:22.458 STDOUT terraform:  + power_state = "active" 2025-01-16 14:27:22.458426 | orchestrator | 14:27:22.458 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.458472 | orchestrator | 14:27:22.458 STDOUT terraform:  + security_groups = (known after apply) 2025-01-16 14:27:22.458505 | orchestrator | 14:27:22.458 STDOUT terraform:  + stop_before_destroy = false 2025-01-16 14:27:22.458550 | orchestrator | 14:27:22.458 STDOUT terraform:  + updated = (known after apply) 2025-01-16 14:27:22.458614 | orchestrator | 14:27:22.458 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-01-16 14:27:22.458643 | orchestrator | 14:27:22.458 STDOUT terraform:  + block_device { 2025-01-16 14:27:22.458678 | orchestrator | 14:27:22.458 STDOUT terraform:  + boot_index = 0 2025-01-16 14:27:22.458721 | orchestrator | 14:27:22.458 STDOUT terraform:  + delete_on_termination = false 2025-01-16 14:27:22.458761 | orchestrator | 14:27:22.458 STDOUT terraform:  + destination_type = "volume" 2025-01-16 14:27:22.458800 | orchestrator | 14:27:22.458 STDOUT terraform:  + multiattach = false 2025-01-16 14:27:22.458842 | orchestrator | 14:27:22.458 STDOUT terraform:  + source_type = "volume" 2025-01-16 14:27:22.458891 | orchestrator | 14:27:22.458 STDOUT terraform:  + uuid = (known after apply) 2025-01-16 14:27:22.458914 | orchestrator | 14:27:22.458 STDOUT terraform:  } 2025-01-16 14:27:22.458938 | orchestrator | 14:27:22.458 STDOUT terraform:  + network { 2025-01-16 14:27:22.458968 | orchestrator | 14:27:22.458 STDOUT terraform:  + access_network = false 2025-01-16 14:27:22.459009 | orchestrator | 14:27:22.458 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-01-16 14:27:22.459054 | orchestrator | 14:27:22.459 
STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-01-16 14:27:22.459097 | orchestrator | 14:27:22.459 STDOUT terraform:  + mac = (known after apply) 2025-01-16 14:27:22.459138 | orchestrator | 14:27:22.459 STDOUT terraform:  + name = (known after apply) 2025-01-16 14:27:22.459220 | orchestrator | 14:27:22.459 STDOUT terraform:  + port = (known after apply) 2025-01-16 14:27:22.459269 | orchestrator | 14:27:22.459 STDOUT terraform:  + uuid = (known after apply) 2025-01-16 14:27:22.459293 | orchestrator | 14:27:22.459 STDOUT terraform:  } 2025-01-16 14:27:22.459318 | orchestrator | 14:27:22.459 STDOUT terraform:  } 2025-01-16 14:27:22.459372 | orchestrator | 14:27:22.459 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-01-16 14:27:22.459427 | orchestrator | 14:27:22.459 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-01-16 14:27:22.459472 | orchestrator | 14:27:22.459 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-01-16 14:27:22.459533 | orchestrator | 14:27:22.459 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-01-16 14:27:22.459580 | orchestrator | 14:27:22.459 STDOUT terraform:  + all_metadata = (known after apply) 2025-01-16 14:27:22.459634 | orchestrator | 14:27:22.459 STDOUT terraform:  + all_tags = (known after apply) 2025-01-16 14:27:22.459670 | orchestrator | 14:27:22.459 STDOUT terraform:  + availability_zone = "nova" 2025-01-16 14:27:22.459701 | orchestrator | 14:27:22.459 STDOUT terraform:  + config_drive = true 2025-01-16 14:27:22.459748 | orchestrator | 14:27:22.459 STDOUT terraform:  + created = (known after apply) 2025-01-16 14:27:22.459790 | orchestrator | 14:27:22.459 STDOUT terraform:  + flavor_id = (known after apply) 2025-01-16 14:27:22.459831 | orchestrator | 14:27:22.459 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-01-16 14:27:22.459863 | orchestrator | 14:27:22.459 STDOUT terraform:  + force_delete = false 2025-01-16 14:27:22.459906 | orchestrator | 14:27:22.459 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.459948 | orchestrator | 14:27:22.459 STDOUT terraform:  + image_id = (known after apply) 2025-01-16 14:27:22.459990 | orchestrator | 14:27:22.459 STDOUT terraform:  + image_name = (known after apply) 2025-01-16 14:27:22.460023 | orchestrator | 14:27:22.459 STDOUT terraform:  + key_pair = "testbed" 2025-01-16 14:27:22.460059 | orchestrator | 14:27:22.460 STDOUT terraform:  + name = "testbed-node-1" 2025-01-16 14:27:22.460091 | orchestrator | 14:27:22.460 STDOUT terraform:  + power_state = "active" 2025-01-16 14:27:22.460132 | orchestrator | 14:27:22.460 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.460173 | orchestrator | 14:27:22.460 STDOUT terraform:  + security_groups = (known after apply) 2025-01-16 14:27:22.460221 | orchestrator | 14:27:22.460 STDOUT terraform:  + stop_before_destroy = false 2025-01-16 14:27:22.460266 | orchestrator | 14:27:22.460 STDOUT terraform:  + updated = (known after apply) 2025-01-16 14:27:22.460334 | orchestrator | 14:27:22.460 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-01-16 14:27:22.460374 | orchestrator | 14:27:22.460 STDOUT terraform:  + block_device { 2025-01-16 14:27:22.460409 | orchestrator | 14:27:22.460 STDOUT terraform:  + boot_index = 0 2025-01-16 14:27:22.460464 | orchestrator | 14:27:22.460 STDOUT terraform:  + delete_on_termination = false 2025-01-16 14:27:22.460567 | orchestrator | 14:27:22.460 STDOUT terraform:  + 
destination_type = "volume" 2025-01-16 14:27:22.460626 | orchestrator | 14:27:22.460 STDOUT terraform:  + multiattach = false 2025-01-16 14:27:22.460682 | orchestrator | 14:27:22.460 STDOUT terraform:  + source_type = "volume" 2025-01-16 14:27:22.460731 | orchestrator | 14:27:22.460 STDOUT terraform:  + uuid = (known after apply) 2025-01-16 14:27:22.460769 | orchestrator | 14:27:22.460 STDOUT terraform:  } 2025-01-16 14:27:22.460795 | orchestrator | 14:27:22.460 STDOUT terraform:  + network { 2025-01-16 14:27:22.460842 | orchestrator | 14:27:22.460 STDOUT terraform:  + access_network = false 2025-01-16 14:27:22.460882 | orchestrator | 14:27:22.460 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-01-16 14:27:22.460937 | orchestrator | 14:27:22.460 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-01-16 14:27:22.461001 | orchestrator | 14:27:22.460 STDOUT terraform:  + mac = (known after apply) 2025-01-16 14:27:22.461042 | orchestrator | 14:27:22.461 STDOUT terraform:  + name = (known after apply) 2025-01-16 14:27:22.461096 | orchestrator | 14:27:22.461 STDOUT terraform:  + port = (known after apply) 2025-01-16 14:27:22.461151 | orchestrator | 14:27:22.461 STDOUT terraform:  + uuid = (known after apply) 2025-01-16 14:27:22.461177 | orchestrator | 14:27:22.461 STDOUT terraform:  } 2025-01-16 14:27:22.461229 | orchestrator | 14:27:22.461 STDOUT terraform:  } 2025-01-16 14:27:22.461289 | orchestrator | 14:27:22.461 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-01-16 14:27:22.461345 | orchestrator | 14:27:22.461 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-01-16 14:27:22.461402 | orchestrator | 14:27:22.461 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-01-16 14:27:22.461453 | orchestrator | 14:27:22.461 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-01-16 14:27:22.461508 | orchestrator | 14:27:22.461 STDOUT terraform:  + all_metadata = (known after apply) 2025-01-16 14:27:22.461568 | orchestrator | 14:27:22.461 STDOUT terraform:  + all_tags = (known after apply) 2025-01-16 14:27:22.461602 | orchestrator | 14:27:22.461 STDOUT terraform:  + availability_zone = "nova" 2025-01-16 14:27:22.461644 | orchestrator | 14:27:22.461 STDOUT terraform:  + config_drive = true 2025-01-16 14:27:22.461700 | orchestrator | 14:27:22.461 STDOUT terraform:  + created = (known after apply) 2025-01-16 14:27:22.461749 | orchestrator | 14:27:22.461 STDOUT terraform:  + flavor_id = (known after apply) 2025-01-16 14:27:22.461802 | orchestrator | 14:27:22.461 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-01-16 14:27:22.461835 | orchestrator | 14:27:22.461 STDOUT terraform:  + force_delete = false 2025-01-16 14:27:22.461896 | orchestrator | 14:27:22.461 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.461954 | orchestrator | 14:27:22.461 STDOUT terraform:  + image_id = (known after apply) 2025-01-16 14:27:22.461999 | orchestrator | 14:27:22.461 STDOUT terraform:  + image_name = (known after apply) 2025-01-16 14:27:22.462060 | orchestrator | 14:27:22.462 STDOUT terraform:  + key_pair = "testbed" 2025-01-16 14:27:22.462115 | orchestrator | 14:27:22.462 STDOUT terraform:  + name = "testbed-node-2" 2025-01-16 14:27:22.462149 | orchestrator | 14:27:22.462 STDOUT terraform:  + power_state = "active" 2025-01-16 14:27:22.462269 | orchestrator | 14:27:22.462 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.462330 | orchestrator | 14:27:22.462 STDOUT terraform:  
+ security_groups = (known after apply) 2025-01-16 14:27:22.462365 | orchestrator | 14:27:22.462 STDOUT terraform:  + stop_before_destroy = false 2025-01-16 14:27:22.462426 | orchestrator | 14:27:22.462 STDOUT terraform:  + updated = (known after apply) 2025-01-16 14:27:22.462500 | orchestrator | 14:27:22.462 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-01-16 14:27:22.462527 | orchestrator | 14:27:22.462 STDOUT terraform:  + block_device { 2025-01-16 14:27:22.462586 | orchestrator | 14:27:22.462 STDOUT terraform:  + boot_index = 0 2025-01-16 14:27:22.462633 | orchestrator | 14:27:22.462 STDOUT terraform:  + delete_on_termination = false 2025-01-16 14:27:22.462680 | orchestrator | 14:27:22.462 STDOUT terraform:  + destination_type = "volume" 2025-01-16 14:27:22.462733 | orchestrator | 14:27:22.462 STDOUT terraform:  + multiattach = false 2025-01-16 14:27:22.462775 | orchestrator | 14:27:22.462 STDOUT terraform:  + source_type = "volume" 2025-01-16 14:27:22.462836 | orchestrator | 14:27:22.462 STDOUT terraform:  + uuid = (known after apply) 2025-01-16 14:27:22.462861 | orchestrator | 14:27:22.462 STDOUT terraform:  } 2025-01-16 14:27:22.462898 | orchestrator | 14:27:22.462 STDOUT terraform:  + network { 2025-01-16 14:27:22.462928 | orchestrator | 14:27:22.462 STDOUT terraform:  + access_network = false 2025-01-16 14:27:22.462982 | orchestrator | 14:27:22.462 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-01-16 14:27:22.463030 | orchestrator | 14:27:22.462 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-01-16 14:27:22.463076 | orchestrator | 14:27:22.463 STDOUT terraform:  + mac = (known after apply) 2025-01-16 14:27:22.463132 | orchestrator | 14:27:22.463 STDOUT terraform:  + name = (known after apply) 2025-01-16 14:27:22.463173 | orchestrator | 14:27:22.463 STDOUT terraform:  + port = (known after apply) 2025-01-16 14:27:22.463260 | orchestrator | 14:27:22.463 STDOUT terraform:  + uuid = (known after apply) 2025-01-16 14:27:22.463303 | orchestrator | 14:27:22.463 STDOUT terraform:  } 2025-01-16 14:27:22.463328 | orchestrator | 14:27:22.463 STDOUT terraform:  } 2025-01-16 14:27:22.463395 | orchestrator | 14:27:22.463 STDOUT terraform:  # openstack_compute_instance_v2.node_server[3] will be created 2025-01-16 14:27:22.463461 | orchestrator | 14:27:22.463 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-01-16 14:27:22.463568 | orchestrator | 14:27:22.463 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-01-16 14:27:22.463627 | orchestrator | 14:27:22.463 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-01-16 14:27:22.463687 | orchestrator | 14:27:22.463 STDOUT terraform:  + all_metadata = (known after apply) 2025-01-16 14:27:22.463736 | orchestrator | 14:27:22.463 STDOUT terraform:  + all_tags = (known after apply) 2025-01-16 14:27:22.463785 | orchestrator | 14:27:22.463 STDOUT terraform:  + availability_zone = "nova" 2025-01-16 14:27:22.463816 | orchestrator | 14:27:22.463 STDOUT terraform:  + config_drive = true 2025-01-16 14:27:22.463875 | orchestrator | 14:27:22.463 STDOUT terraform:  + created = (known after apply) 2025-01-16 14:27:22.463932 | orchestrator | 14:27:22.463 STDOUT terraform:  + flavor_id = (known after apply) 2025-01-16 14:27:22.463987 | orchestrator | 14:27:22.463 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-01-16 14:27:22.464028 | orchestrator | 14:27:22.464 STDOUT terraform:  + force_delete = false 2025-01-16 14:27:22.464086 | orchestrator | 
14:27:22.464 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.464140 | orchestrator | 14:27:22.464 STDOUT terraform:  + image_id = (known after apply) 2025-01-16 14:27:22.464268 | orchestrator | 14:27:22.464 STDOUT terraform:  + image_name = (known after apply) 2025-01-16 14:27:22.464326 | orchestrator | 14:27:22.464 STDOUT terraform:  + key_pair = "testbed" 2025-01-16 14:27:22.464369 | orchestrator | 14:27:22.464 STDOUT terraform:  + name = "testbed-node-3" 2025-01-16 14:27:22.464417 | orchestrator | 14:27:22.464 STDOUT terraform:  + power_state = "active" 2025-01-16 14:27:22.464469 | orchestrator | 14:27:22.464 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.464517 | orchestrator | 14:27:22.464 STDOUT terraform:  + security_groups = (known after apply) 2025-01-16 14:27:22.464558 | orchestrator | 14:27:22.464 STDOUT terraform:  + stop_before_destroy = false 2025-01-16 14:27:22.464609 | orchestrator | 14:27:22.464 STDOUT terraform:  + updated = (known after apply) 2025-01-16 14:27:22.464684 | orchestrator | 14:27:22.464 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-01-16 14:27:22.464714 | orchestrator | 14:27:22.464 STDOUT terraform:  + block_device { 2025-01-16 14:27:22.464749 | orchestrator | 14:27:22.464 STDOUT terraform:  + boot_index = 0 2025-01-16 14:27:22.464787 | orchestrator | 14:27:22.464 STDOUT terraform:  + delete_on_termination = false 2025-01-16 14:27:22.464844 | orchestrator | 14:27:22.464 STDOUT terraform:  + destination_type = "volume" 2025-01-16 14:27:22.464881 | orchestrator | 14:27:22.464 STDOUT terraform:  + multiattach = false 2025-01-16 14:27:22.464920 | orchestrator | 14:27:22.464 STDOUT terraform:  + source_type = "volume" 2025-01-16 14:27:22.464986 | orchestrator | 14:27:22.464 STDOUT terraform:  + uuid = (known after apply) 2025-01-16 14:27:22.465037 | orchestrator | 14:27:22.464 STDOUT terraform:  } 2025-01-16 14:27:22.465063 | orchestrator | 14:27:22.465 STDOUT terraform:  + network { 2025-01-16 14:27:22.465094 | orchestrator | 14:27:22.465 STDOUT terraform:  + access_network = false 2025-01-16 14:27:22.465134 | orchestrator | 14:27:22.465 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-01-16 14:27:22.465204 | orchestrator | 14:27:22.465 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-01-16 14:27:22.465247 | orchestrator | 14:27:22.465 STDOUT terraform:  + mac = (known after apply) 2025-01-16 14:27:22.465286 | orchestrator | 14:27:22.465 STDOUT terraform:  + name = (known after apply) 2025-01-16 14:27:22.465326 | orchestrator | 14:27:22.465 STDOUT terraform:  + port = (known after apply) 2025-01-16 14:27:22.465366 | orchestrator | 14:27:22.465 STDOUT terraform:  + uuid = (known after apply) 2025-01-16 14:27:22.465388 | orchestrator | 14:27:22.465 STDOUT terraform:  } 2025-01-16 14:27:22.465422 | orchestrator | 14:27:22.465 STDOUT terraform:  } 2025-01-16 14:27:22.465476 | orchestrator | 14:27:22.465 STDOUT terraform:  # openstack_compute_instance_v2.node_server[4] will be created 2025-01-16 14:27:22.465525 | orchestrator | 14:27:22.465 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-01-16 14:27:22.465567 | orchestrator | 14:27:22.465 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-01-16 14:27:22.465615 | orchestrator | 14:27:22.465 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-01-16 14:27:22.465656 | orchestrator | 14:27:22.465 STDOUT terraform:  + all_metadata = (known after apply) 2025-01-16 
14:27:22.465698 | orchestrator | 14:27:22.465 STDOUT terraform:  + all_tags = (known after apply) 2025-01-16 14:27:22.465731 | orchestrator | 14:27:22.465 STDOUT terraform:  + availability_zone = "nova" 2025-01-16 14:27:22.465759 | orchestrator | 14:27:22.465 STDOUT terraform:  + config_drive = true 2025-01-16 14:27:22.465800 | orchestrator | 14:27:22.465 STDOUT terraform:  + created = (known after apply) 2025-01-16 14:27:22.465843 | orchestrator | 14:27:22.465 STDOUT terraform:  + flavor_id = (known after apply) 2025-01-16 14:27:22.465880 | orchestrator | 14:27:22.465 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-01-16 14:27:22.465910 | orchestrator | 14:27:22.465 STDOUT terraform:  + force_delete = false 2025-01-16 14:27:22.465953 | orchestrator | 14:27:22.465 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.465995 | orchestrator | 14:27:22.465 STDOUT terraform:  + image_id = (known after apply) 2025-01-16 14:27:22.466053 | orchestrator | 14:27:22.466 STDOUT terraform:  + image_name = (known after apply) 2025-01-16 14:27:22.466091 | orchestrator | 14:27:22.466 STDOUT terraform:  + key_pair = "testbed" 2025-01-16 14:27:22.466148 | orchestrator | 14:27:22.466 STDOUT terraform:  + name = "testbed-node-4" 2025-01-16 14:27:22.466199 | orchestrator | 14:27:22.466 STDOUT terraform:  + power_state = "active" 2025-01-16 14:27:22.466245 | orchestrator | 14:27:22.466 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.466287 | orchestrator | 14:27:22.466 STDOUT terraform:  + security_groups = (known after apply) 2025-01-16 14:27:22.466319 | orchestrator | 14:27:22.466 STDOUT terraform:  + stop_before_destroy = false 2025-01-16 14:27:22.466362 | orchestrator | 14:27:22.466 STDOUT terraform:  + updated = (known after apply) 2025-01-16 14:27:22.466423 | orchestrator | 14:27:22.466 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-01-16 14:27:22.466450 | orchestrator | 14:27:22.466 STDOUT terraform:  + block_device { 2025-01-16 14:27:22.466484 | orchestrator | 14:27:22.466 STDOUT terraform:  + boot_index = 0 2025-01-16 14:27:22.466519 | orchestrator | 14:27:22.466 STDOUT terraform:  + delete_on_termination = false 2025-01-16 14:27:22.466558 | orchestrator | 14:27:22.466 STDOUT terraform:  + destination_type = "volume" 2025-01-16 14:27:22.466595 | orchestrator | 14:27:22.466 STDOUT terraform:  + multiattach = false 2025-01-16 14:27:22.466635 | orchestrator | 14:27:22.466 STDOUT terraform:  + source_type = "volume" 2025-01-16 14:27:22.466681 | orchestrator | 14:27:22.466 STDOUT terraform:  + uuid = (known after apply) 2025-01-16 14:27:22.466704 | orchestrator | 14:27:22.466 STDOUT terraform:  } 2025-01-16 14:27:22.466729 | orchestrator | 14:27:22.466 STDOUT terraform:  + network { 2025-01-16 14:27:22.466759 | orchestrator | 14:27:22.466 STDOUT terraform:  + access_network = false 2025-01-16 14:27:22.466810 | orchestrator | 14:27:22.466 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-01-16 14:27:22.466849 | orchestrator | 14:27:22.466 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-01-16 14:27:22.466888 | orchestrator | 14:27:22.466 STDOUT terraform:  + mac = (known after apply) 2025-01-16 14:27:22.466928 | orchestrator | 14:27:22.466 STDOUT terraform:  + name = (known after apply) 2025-01-16 14:27:22.466967 | orchestrator | 14:27:22.466 STDOUT terraform:  + port = (known after apply) 2025-01-16 14:27:22.467007 | orchestrator | 14:27:22.466 STDOUT terraform:  + uuid = (known after apply) 2025-01-16 
14:27:22.467030 | orchestrator | 14:27:22.467 STDOUT terraform:  } 2025-01-16 14:27:22.467052 | orchestrator | 14:27:22.467 STDOUT terraform:  } 2025-01-16 14:27:22.467102 | orchestrator | 14:27:22.467 STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created 2025-01-16 14:27:22.467152 | orchestrator | 14:27:22.467 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-01-16 14:27:22.467238 | orchestrator | 14:27:22.467 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-01-16 14:27:22.467290 | orchestrator | 14:27:22.467 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-01-16 14:27:22.467339 | orchestrator | 14:27:22.467 STDOUT terraform:  + all_metadata = (known after apply) 2025-01-16 14:27:22.467383 | orchestrator | 14:27:22.467 STDOUT terraform:  + all_tags = (known after apply) 2025-01-16 14:27:22.467419 | orchestrator | 14:27:22.467 STDOUT terraform:  + availability_zone = "nova" 2025-01-16 14:27:22.467463 | orchestrator | 14:27:22.467 STDOUT terraform:  + config_drive = true 2025-01-16 14:27:22.467508 | orchestrator | 14:27:22.467 STDOUT terraform:  + created = (known after apply) 2025-01-16 14:27:22.467551 | orchestrator | 14:27:22.467 STDOUT terraform:  + flavor_id = (known after apply) 2025-01-16 14:27:22.467591 | orchestrator | 14:27:22.467 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-01-16 14:27:22.467625 | orchestrator | 14:27:22.467 STDOUT terraform:  + force_delete = false 2025-01-16 14:27:22.467669 | orchestrator | 14:27:22.467 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.467712 | orchestrator | 14:27:22.467 STDOUT terraform:  + image_id = (known after apply) 2025-01-16 14:27:22.467755 | orchestrator | 14:27:22.467 STDOUT terraform:  + image_name = (known after apply) 2025-01-16 14:27:22.467789 | orchestrator | 14:27:22.467 STDOUT terraform:  + key_pair = "testbed" 2025-01-16 14:27:22.467827 | orchestrator | 14:27:22.467 STDOUT terraform:  + name = "testbed-node-5" 2025-01-16 14:27:22.467861 | orchestrator | 14:27:22.467 STDOUT terraform:  + power_state = "active" 2025-01-16 14:27:22.467905 | orchestrator | 14:27:22.467 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.467948 | orchestrator | 14:27:22.467 STDOUT terraform:  + security_groups = (known after apply) 2025-01-16 14:27:22.467978 | orchestrator | 14:27:22.467 STDOUT terraform:  + stop_before_destroy = false 2025-01-16 14:27:22.468028 | orchestrator | 14:27:22.467 STDOUT terraform:  + updated = (known after apply) 2025-01-16 14:27:22.468085 | orchestrator | 14:27:22.468 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-01-16 14:27:22.468111 | orchestrator | 14:27:22.468 STDOUT terraform:  + block_device { 2025-01-16 14:27:22.468144 | orchestrator | 14:27:22.468 STDOUT terraform:  + boot_index = 0 2025-01-16 14:27:22.468193 | orchestrator | 14:27:22.468 STDOUT terraform:  + delete_on_termination = false 2025-01-16 14:27:22.468232 | orchestrator | 14:27:22.468 STDOUT terraform:  + destination_type = "volume" 2025-01-16 14:27:22.468268 | orchestrator | 14:27:22.468 STDOUT terraform:  + multiattach = false 2025-01-16 14:27:22.468307 | orchestrator | 14:27:22.468 STDOUT terraform:  + source_type = "volume" 2025-01-16 14:27:22.468353 | orchestrator | 14:27:22.468 STDOUT terraform:  + uuid = (known after apply) 2025-01-16 14:27:22.468374 | orchestrator | 14:27:22.468 STDOUT terraform:  } 2025-01-16 14:27:22.468400 | orchestrator | 14:27:22.468 STDOUT terraform:  + 
network { 2025-01-16 14:27:22.468427 | orchestrator | 14:27:22.468 STDOUT terraform:  + access_network = false 2025-01-16 14:27:22.468464 | orchestrator | 14:27:22.468 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-01-16 14:27:22.468502 | orchestrator | 14:27:22.468 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-01-16 14:27:22.468543 | orchestrator | 14:27:22.468 STDOUT terraform:  + mac = (known after apply) 2025-01-16 14:27:22.468582 | orchestrator | 14:27:22.468 STDOUT terraform:  + name = (known after apply) 2025-01-16 14:27:22.468619 | orchestrator | 14:27:22.468 STDOUT terraform:  + port = (known after apply) 2025-01-16 14:27:22.468673 | orchestrator | 14:27:22.468 STDOUT terraform:  + uuid = (known after apply) 2025-01-16 14:27:22.468708 | orchestrator | 14:27:22.468 STDOUT terraform:  } 2025-01-16 14:27:22.468732 | orchestrator | 14:27:22.468 STDOUT terraform:  } 2025-01-16 14:27:22.468778 | orchestrator | 14:27:22.468 STDOUT terraform:  # openstack_compute_keypair_v2.key will be created 2025-01-16 14:27:22.468822 | orchestrator | 14:27:22.468 STDOUT terraform:  + resource "openstack_compute_keypair_v2" "key" { 2025-01-16 14:27:22.468859 | orchestrator | 14:27:22.468 STDOUT terraform:  + fingerprint = (known after apply) 2025-01-16 14:27:22.468895 | orchestrator | 14:27:22.468 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.468925 | orchestrator | 14:27:22.468 STDOUT terraform:  + name = "testbed" 2025-01-16 14:27:22.468957 | orchestrator | 14:27:22.468 STDOUT terraform:  + private_key = (sensitive value) 2025-01-16 14:27:22.468991 | orchestrator | 14:27:22.468 STDOUT terraform:  + public_key = (known after apply) 2025-01-16 14:27:22.469027 | orchestrator | 14:27:22.468 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.469065 | orchestrator | 14:27:22.469 STDOUT terraform:  + user_id = (known after apply) 2025-01-16 14:27:22.469087 | orchestrator | 14:27:22.469 STDOUT terraform:  } 2025-01-16 14:27:22.469143 | orchestrator | 14:27:22.469 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2025-01-16 14:27:22.469221 | orchestrator | 14:27:22.469 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-01-16 14:27:22.469259 | orchestrator | 14:27:22.469 STDOUT terraform:  + device = (known after apply) 2025-01-16 14:27:22.469296 | orchestrator | 14:27:22.469 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.469336 | orchestrator | 14:27:22.469 STDOUT terraform:  + instance_id = (known after apply) 2025-01-16 14:27:22.469373 | orchestrator | 14:27:22.469 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.469409 | orchestrator | 14:27:22.469 STDOUT terraform:  + volume_id = (known after apply) 2025-01-16 14:27:22.469431 | orchestrator | 14:27:22.469 STDOUT terraform:  } 2025-01-16 14:27:22.469508 | orchestrator | 14:27:22.469 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2025-01-16 14:27:22.469578 | orchestrator | 14:27:22.469 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-01-16 14:27:22.469617 | orchestrator | 14:27:22.469 STDOUT terraform:  + device = (known after apply) 2025-01-16 14:27:22.469653 | orchestrator | 14:27:22.469 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.469700 | orchestrator | 14:27:22.469 STDOUT terraform:  + instance_id = (known after apply) 2025-01-16 
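The six node instances planned above (testbed-node-0 through testbed-node-5, flavor OSISM-8V-32) and the "testbed" key pair could come from a counted resource like the sketch below; var.number_of_nodes, the user_data source, and the boot-volume reference are assumptions, not the actual configuration.

```hcl
# Hypothetical sketch matching the six planned node instances and the
# generated key pair. Giving only a name to the keypair resource lets
# the provider generate the key, which matches the computed public_key
# and sensitive private_key shown in the plan.
resource "openstack_compute_keypair_v2" "key" {
  name = "testbed"
}

resource "openstack_compute_instance_v2" "node_server" {
  count             = var.number_of_nodes              # 6 in this plan
  name              = "testbed-node-${count.index}"
  flavor_name       = "OSISM-8V-32"
  key_pair          = openstack_compute_keypair_v2.key.name
  availability_zone = "nova"
  config_drive      = true
  power_state       = "active"
  user_data         = file("user_data.yml")            # assumed source of the hashed user_data

  block_device {
    uuid                  = openstack_blockstorage_volume_v3.node_base_volume[count.index].id # assumed boot volume
    source_type           = "volume"
    destination_type      = "volume"
    boot_index            = 0
    delete_on_termination = false
  }

  network {
    port = openstack_networking_port_v2.node_port_management[count.index].id
  }
}
```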
14:27:22.469741 | orchestrator | 14:27:22.469 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.469780 | orchestrator | 14:27:22.469 STDOUT terraform:  + volume_id = (known after apply) 2025-01-16 14:27:22.469802 | orchestrator | 14:27:22.469 STDOUT terraform:  } 2025-01-16 14:27:22.469861 | orchestrator | 14:27:22.469 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2025-01-16 14:27:22.469916 | orchestrator | 14:27:22.469 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-01-16 14:27:22.469955 | orchestrator | 14:27:22.469 STDOUT terraform:  + device = (known after apply) 2025-01-16 14:27:22.469991 | orchestrator | 14:27:22.469 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.470057 | orchestrator | 14:27:22.469 STDOUT terraform:  + instance_id = (known after apply) 2025-01-16 14:27:22.470097 | orchestrator | 14:27:22.470 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.470135 | orchestrator | 14:27:22.470 STDOUT terraform:  + volume_id = (known after apply) 2025-01-16 14:27:22.470157 | orchestrator | 14:27:22.470 STDOUT terraform:  } 2025-01-16 14:27:22.470272 | orchestrator | 14:27:22.470 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2025-01-16 14:27:22.470341 | orchestrator | 14:27:22.470 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-01-16 14:27:22.470380 | orchestrator | 14:27:22.470 STDOUT terraform:  + device = (known after apply) 2025-01-16 14:27:22.470429 | orchestrator | 14:27:22.470 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.470466 | orchestrator | 14:27:22.470 STDOUT terraform:  + instance_id = (known after apply) 2025-01-16 14:27:22.470503 | orchestrator | 14:27:22.470 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.470548 | orchestrator | 14:27:22.470 STDOUT terraform:  + volume_id = (known after apply) 2025-01-16 14:27:22.470572 | orchestrator | 14:27:22.470 STDOUT terraform:  } 2025-01-16 14:27:22.470627 | orchestrator | 14:27:22.470 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2025-01-16 14:27:22.470680 | orchestrator | 14:27:22.470 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-01-16 14:27:22.470716 | orchestrator | 14:27:22.470 STDOUT terraform:  + device = (known after apply) 2025-01-16 14:27:22.470752 | orchestrator | 14:27:22.470 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.470789 | orchestrator | 14:27:22.470 STDOUT terraform:  + instance_id = (known after apply) 2025-01-16 14:27:22.470824 | orchestrator | 14:27:22.470 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.470859 | orchestrator | 14:27:22.470 STDOUT terraform:  + volume_id = (known after apply) 2025-01-16 14:27:22.470880 | orchestrator | 14:27:22.470 STDOUT terraform:  } 2025-01-16 14:27:22.470941 | orchestrator | 14:27:22.470 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created 2025-01-16 14:27:22.470999 | orchestrator | 14:27:22.470 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-01-16 14:27:22.471036 | orchestrator | 14:27:22.471 STDOUT terraform:  + device = (known after apply) 2025-01-16 14:27:22.471072 | orchestrator | 14:27:22.471 STDOUT terraform:  + id = 
(known after apply) 2025-01-16 14:27:22.471107 | orchestrator | 14:27:22.471 STDOUT terraform:  + instance_id = (known after apply) 2025-01-16 14:27:22.471145 | orchestrator | 14:27:22.471 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.471196 | orchestrator | 14:27:22.471 STDOUT terraform:  + volume_id = (known after apply) 2025-01-16 14:27:22.471222 | orchestrator | 14:27:22.471 STDOUT terraform:  } 2025-01-16 14:27:22.471277 | orchestrator | 14:27:22.471 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created 2025-01-16 14:27:22.471336 | orchestrator | 14:27:22.471 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-01-16 14:27:22.471371 | orchestrator | 14:27:22.471 STDOUT terraform:  + device = (known after apply) 2025-01-16 14:27:22.471410 | orchestrator | 14:27:22.471 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.471446 | orchestrator | 14:27:22.471 STDOUT terraform:  + instance_id = (known after apply) 2025-01-16 14:27:22.471487 | orchestrator | 14:27:22.471 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.471523 | orchestrator | 14:27:22.471 STDOUT terraform:  + volume_id = (known after apply) 2025-01-16 14:27:22.471547 | orchestrator | 14:27:22.471 STDOUT terraform:  } 2025-01-16 14:27:22.471603 | orchestrator | 14:27:22.471 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created 2025-01-16 14:27:22.471658 | orchestrator | 14:27:22.471 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-01-16 14:27:22.471693 | orchestrator | 14:27:22.471 STDOUT terraform:  + device = (known after apply) 2025-01-16 14:27:22.471733 | orchestrator | 14:27:22.471 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.471769 | orchestrator | 14:27:22.471 STDOUT terraform:  + instance_id = (known after apply) 2025-01-16 14:27:22.471811 | orchestrator | 14:27:22.471 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.471847 | orchestrator | 14:27:22.471 STDOUT terraform:  + volume_id = (known after apply) 2025-01-16 14:27:22.471869 | orchestrator | 14:27:22.471 STDOUT terraform:  } 2025-01-16 14:27:22.471926 | orchestrator | 14:27:22.471 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created 2025-01-16 14:27:22.471982 | orchestrator | 14:27:22.471 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-01-16 14:27:22.472019 | orchestrator | 14:27:22.471 STDOUT terraform:  + device = (known after apply) 2025-01-16 14:27:22.472055 | orchestrator | 14:27:22.472 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.472091 | orchestrator | 14:27:22.472 STDOUT terraform:  + instance_id = (known after apply) 2025-01-16 14:27:22.472128 | orchestrator | 14:27:22.472 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.472215 | orchestrator | 14:27:22.472 STDOUT terraform:  + volume_id = (known after apply) 2025-01-16 14:27:22.472243 | orchestrator | 14:27:22.472 STDOUT terraform:  } 2025-01-16 14:27:22.472304 | orchestrator | 14:27:22.472 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[9] will be created 2025-01-16 14:27:22.472374 | orchestrator | 14:27:22.472 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-01-16 14:27:22.472414 | orchestrator | 
14:27:22.472 STDOUT terraform:  + device = (known after apply) 2025-01-16 14:27:22.472451 | orchestrator | 14:27:22.472 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.472487 | orchestrator | 14:27:22.472 STDOUT terraform:  + instance_id = (known after apply) 2025-01-16 14:27:22.472522 | orchestrator | 14:27:22.472 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.472559 | orchestrator | 14:27:22.472 STDOUT terraform:  + volume_id = (known after apply) 2025-01-16 14:27:22.472580 | orchestrator | 14:27:22.472 STDOUT terraform:  } 2025-01-16 14:27:22.472638 | orchestrator | 14:27:22.472 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[10] will be created 2025-01-16 14:27:22.472692 | orchestrator | 14:27:22.472 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-01-16 14:27:22.472729 | orchestrator | 14:27:22.472 STDOUT terraform:  + device = (known after apply) 2025-01-16 14:27:22.472766 | orchestrator | 14:27:22.472 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.472801 | orchestrator | 14:27:22.472 STDOUT terraform:  + instance_id = (known after apply) 2025-01-16 14:27:22.472837 | orchestrator | 14:27:22.472 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.472872 | orchestrator | 14:27:22.472 STDOUT terraform:  + volume_id = (known after apply) 2025-01-16 14:27:22.472893 | orchestrator | 14:27:22.472 STDOUT terraform:  } 2025-01-16 14:27:22.472949 | orchestrator | 14:27:22.472 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[11] will be created 2025-01-16 14:27:22.473009 | orchestrator | 14:27:22.472 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-01-16 14:27:22.473044 | orchestrator | 14:27:22.473 STDOUT terraform:  + device = (known after apply) 2025-01-16 14:27:22.473080 | orchestrator | 14:27:22.473 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.473117 | orchestrator | 14:27:22.473 STDOUT terraform:  + instance_id = (known after apply) 2025-01-16 14:27:22.473154 | orchestrator | 14:27:22.473 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.473230 | orchestrator | 14:27:22.473 STDOUT terraform:  + volume_id = (known after apply) 2025-01-16 14:27:22.473256 | orchestrator | 14:27:22.473 STDOUT terraform:  } 2025-01-16 14:27:22.473319 | orchestrator | 14:27:22.473 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[12] will be created 2025-01-16 14:27:22.473376 | orchestrator | 14:27:22.473 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-01-16 14:27:22.473415 | orchestrator | 14:27:22.473 STDOUT terraform:  + device = (known after apply) 2025-01-16 14:27:22.473452 | orchestrator | 14:27:22.473 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.473489 | orchestrator | 14:27:22.473 STDOUT terraform:  + instance_id = (known after apply) 2025-01-16 14:27:22.473527 | orchestrator | 14:27:22.473 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.473563 | orchestrator | 14:27:22.473 STDOUT terraform:  + volume_id = (known after apply) 2025-01-16 14:27:22.473587 | orchestrator | 14:27:22.473 STDOUT terraform:  } 2025-01-16 14:27:22.473643 | orchestrator | 14:27:22.473 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[13] will be created 2025-01-16 14:27:22.473699 | orchestrator | 
14:27:22.473 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-01-16 14:27:22.473736 | orchestrator | 14:27:22.473 STDOUT terraform:  + device = (known after apply) 2025-01-16 14:27:22.473783 | orchestrator | 14:27:22.473 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.473821 | orchestrator | 14:27:22.473 STDOUT terraform:  + instance_id = (known after apply) 2025-01-16 14:27:22.473857 | orchestrator | 14:27:22.473 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.473892 | orchestrator | 14:27:22.473 STDOUT terraform:  + volume_id = (known after apply) 2025-01-16 14:27:22.473914 | orchestrator | 14:27:22.473 STDOUT terraform:  } 2025-01-16 14:27:22.473970 | orchestrator | 14:27:22.473 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[14] will be created 2025-01-16 14:27:22.474045 | orchestrator | 14:27:22.473 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-01-16 14:27:22.474085 | orchestrator | 14:27:22.474 STDOUT terraform:  + device = (known after apply) 2025-01-16 14:27:22.474122 | orchestrator | 14:27:22.474 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.474158 | orchestrator | 14:27:22.474 STDOUT terraform:  + instance_id = (known after apply) 2025-01-16 14:27:22.474207 | orchestrator | 14:27:22.474 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.474250 | orchestrator | 14:27:22.474 STDOUT terraform:  + volume_id = (known after apply) 2025-01-16 14:27:22.474277 | orchestrator | 14:27:22.474 STDOUT terraform:  } 2025-01-16 14:27:22.474338 | orchestrator | 14:27:22.474 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[15] will be created 2025-01-16 14:27:22.474394 | orchestrator | 14:27:22.474 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-01-16 14:27:22.474432 | orchestrator | 14:27:22.474 STDOUT terraform:  + device = (known after apply) 2025-01-16 14:27:22.474472 | orchestrator | 14:27:22.474 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.474509 | orchestrator | 14:27:22.474 STDOUT terraform:  + instance_id = (known after apply) 2025-01-16 14:27:22.474546 | orchestrator | 14:27:22.474 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.474584 | orchestrator | 14:27:22.474 STDOUT terraform:  + volume_id = (known after apply) 2025-01-16 14:27:22.474606 | orchestrator | 14:27:22.474 STDOUT terraform:  } 2025-01-16 14:27:22.474663 | orchestrator | 14:27:22.474 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[16] will be created 2025-01-16 14:27:22.474720 | orchestrator | 14:27:22.474 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-01-16 14:27:22.474757 | orchestrator | 14:27:22.474 STDOUT terraform:  + device = (known after apply) 2025-01-16 14:27:22.474794 | orchestrator | 14:27:22.474 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.474831 | orchestrator | 14:27:22.474 STDOUT terraform:  + instance_id = (known after apply) 2025-01-16 14:27:22.474868 | orchestrator | 14:27:22.474 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.474904 | orchestrator | 14:27:22.474 STDOUT terraform:  + volume_id = (known after apply) 2025-01-16 14:27:22.474929 | orchestrator | 14:27:22.474 STDOUT terraform:  } 2025-01-16 14:27:22.474987 | orchestrator | 14:27:22.474 
STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[17] will be created 2025-01-16 14:27:22.475043 | orchestrator | 14:27:22.474 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-01-16 14:27:22.475080 | orchestrator | 14:27:22.475 STDOUT terraform:  + device = (known after apply) 2025-01-16 14:27:22.475116 | orchestrator | 14:27:22.475 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.475152 | orchestrator | 14:27:22.475 STDOUT terraform:  + instance_id = (known after apply) 2025-01-16 14:27:22.475211 | orchestrator | 14:27:22.475 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.475257 | orchestrator | 14:27:22.475 STDOUT terraform:  + volume_id = (known after apply) 2025-01-16 14:27:22.475281 | orchestrator | 14:27:22.475 STDOUT terraform:  } 2025-01-16 14:27:22.475346 | orchestrator | 14:27:22.475 STDOUT terraform:  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created 2025-01-16 14:27:22.475412 | orchestrator | 14:27:22.475 STDOUT terraform:  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2025-01-16 14:27:22.475451 | orchestrator | 14:27:22.475 STDOUT terraform:  + fixed_ip = (known after apply) 2025-01-16 14:27:22.475495 | orchestrator | 14:27:22.475 STDOUT terraform:  + floating_ip = (known after apply) 2025-01-16 14:27:22.475535 | orchestrator | 14:27:22.475 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.475575 | orchestrator | 14:27:22.475 STDOUT terraform:  + port_id = (known after apply) 2025-01-16 14:27:22.475618 | orchestrator | 14:27:22.475 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.475643 | orchestrator | 14:27:22.475 STDOUT terraform:  } 2025-01-16 14:27:22.475698 | orchestrator | 14:27:22.475 STDOUT terraform:  # openstack_networking_floatingip_v2.manager_floating_ip will be created 2025-01-16 14:27:22.475755 | orchestrator | 14:27:22.475 STDOUT terraform:  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2025-01-16 14:27:22.475791 | orchestrator | 14:27:22.475 STDOUT terraform:  + address = (known after apply) 2025-01-16 14:27:22.475827 | orchestrator | 14:27:22.475 STDOUT terraform:  + all_tags = (known after apply) 2025-01-16 14:27:22.475863 | orchestrator | 14:27:22.475 STDOUT terraform:  + dns_domain = (known after apply) 2025-01-16 14:27:22.475898 | orchestrator | 14:27:22.475 STDOUT terraform:  + dns_name = (known after apply) 2025-01-16 14:27:22.475935 | orchestrator | 14:27:22.475 STDOUT terraform:  + fixed_ip = (known after apply) 2025-01-16 14:27:22.475970 | orchestrator | 14:27:22.475 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.476001 | orchestrator | 14:27:22.475 STDOUT terraform:  + pool = "public" 2025-01-16 14:27:22.476036 | orchestrator | 14:27:22.476 STDOUT terraform:  + port_id = (known after apply) 2025-01-16 14:27:22.476072 | orchestrator | 14:27:22.476 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.476107 | orchestrator | 14:27:22.476 STDOUT terraform:  + subnet_id = (known after apply) 2025-01-16 14:27:22.476141 | orchestrator | 14:27:22.476 STDOUT terraform:  + tenant_id = (known after apply) 2025-01-16 14:27:22.476165 | orchestrator | 14:27:22.476 STDOUT terraform:  } 2025-01-16 14:27:22.476233 | orchestrator | 14:27:22.476 STDOUT terraform:  # openstack_networking_network_v2.net_management will be created 2025-01-16 14:27:22.476286 | 
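The eighteen volume attachments and the manager floating IP planned above pair each node_volume with a node instance and allocate an address from the "public" pool. A sketch under the assumption that the volume-to-node mapping follows the same modulo pattern as the volume names:

```hcl
# Hypothetical sketch: attach every planned data volume to a node and
# give the manager a floating IP. The index arithmetic is an assumption.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = length(openstack_blockstorage_volume_v3.node_volume)
  instance_id = openstack_compute_instance_v2.node_server[count.index % var.number_of_nodes].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}

resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = "public"
}

resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}
```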
orchestrator | 14:27:22.476 STDOUT terraform:  + resource "openstack_networking_network_v2" "net_management" { 2025-01-16 14:27:22.476331 | orchestrator | 14:27:22.476 STDOUT terraform:  + admin_state_up = (known after apply) 2025-01-16 14:27:22.476378 | orchestrator | 14:27:22.476 STDOUT terraform:  + all_tags = (known after apply) 2025-01-16 14:27:22.476411 | orchestrator | 14:27:22.476 STDOUT terraform:  + availability_zone_hints = [ 2025-01-16 14:27:22.476436 | orchestrator | 14:27:22.476 STDOUT terraform:  + "nova", 2025-01-16 14:27:22.476460 | orchestrator | 14:27:22.476 STDOUT terraform:  ] 2025-01-16 14:27:22.476506 | orchestrator | 14:27:22.476 STDOUT terraform:  + dns_domain = (known after apply) 2025-01-16 14:27:22.476551 | orchestrator | 14:27:22.476 STDOUT terraform:  + external = (known after apply) 2025-01-16 14:27:22.476597 | orchestrator | 14:27:22.476 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.476644 | orchestrator | 14:27:22.476 STDOUT terraform:  + mtu = (known after apply) 2025-01-16 14:27:22.476698 | orchestrator | 14:27:22.476 STDOUT terraform:  + name = "net-testbed-management" 2025-01-16 14:27:22.476744 | orchestrator | 14:27:22.476 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-01-16 14:27:22.476790 | orchestrator | 14:27:22.476 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-01-16 14:27:22.476836 | orchestrator | 14:27:22.476 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.476882 | orchestrator | 14:27:22.476 STDOUT terraform:  + shared = (known after apply) 2025-01-16 14:27:22.476928 | orchestrator | 14:27:22.476 STDOUT terraform:  + tenant_id = (known after apply) 2025-01-16 14:27:22.476973 | orchestrator | 14:27:22.476 STDOUT terraform:  + transparent_vlan = (known after apply) 2025-01-16 14:27:22.477007 | orchestrator | 14:27:22.476 STDOUT terraform:  + segments (known after apply) 2025-01-16 14:27:22.477030 | orchestrator | 14:27:22.477 STDOUT terraform:  } 2025-01-16 14:27:22.477084 | orchestrator | 14:27:22.477 STDOUT terraform:  # openstack_networking_port_v2.manager_port_management will be created 2025-01-16 14:27:22.477141 | orchestrator | 14:27:22.477 STDOUT terraform:  + resource "openstack_networking_port_v2" "manager_port_management" { 2025-01-16 14:27:22.477222 | orchestrator | 14:27:22.477 STDOUT terraform:  + admin_state_up = (known after apply) 2025-01-16 14:27:22.477269 | orchestrator | 14:27:22.477 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-01-16 14:27:22.477312 | orchestrator | 14:27:22.477 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-01-16 14:27:22.477359 | orchestrator | 14:27:22.477 STDOUT terraform:  + all_tags = (known after apply) 2025-01-16 14:27:22.477403 | orchestrator | 14:27:22.477 STDOUT terraform:  + device_id = (known after apply) 2025-01-16 14:27:22.477448 | orchestrator | 14:27:22.477 STDOUT terraform:  + device_owner = (known after apply) 2025-01-16 14:27:22.477492 | orchestrator | 14:27:22.477 STDOUT terraform:  + dns_assignment = (known after apply) 2025-01-16 14:27:22.477536 | orchestrator | 14:27:22.477 STDOUT terraform:  + dns_name = (known after apply) 2025-01-16 14:27:22.477580 | orchestrator | 14:27:22.477 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.477625 | orchestrator | 14:27:22.477 STDOUT terraform:  + mac_address = (known after apply) 2025-01-16 14:27:22.477678 | orchestrator | 14:27:22.477 STDOUT terraform:  + network_id = (known after apply) 2025-01-16 
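The management network planned above needs little more than a name and an availability-zone hint. A subnet in 192.168.16.0/20 is implied by the port addresses that follow, but no subnet resource appears in this plan excerpt, so the one in the sketch below is an assumption:

```hcl
# Hypothetical sketch of the planned management network. The subnet
# resource, its name and its CIDR are assumptions inferred from the
# fixed IPs (192.168.16.x) and the /20 suffixes in the port plan.
resource "openstack_networking_network_v2" "net_management" {
  name                    = "net-testbed-management"
  availability_zone_hints = ["nova"]
}

resource "openstack_networking_subnet_v2" "subnet_management" { # assumed name
  name       = "subnet-testbed-management"                      # assumed
  network_id = openstack_networking_network_v2.net_management.id
  cidr       = "192.168.16.0/20"                                # inferred, not confirmed by this excerpt
  ip_version = 4
}
```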
14:27:22.477722 | orchestrator | 14:27:22.477 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-01-16 14:27:22.477765 | orchestrator | 14:27:22.477 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-01-16 14:27:22.477810 | orchestrator | 14:27:22.477 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.477854 | orchestrator | 14:27:22.477 STDOUT terraform:  + security_group_ids = (known after apply) 2025-01-16 14:27:22.477898 | orchestrator | 14:27:22.477 STDOUT terraform:  + tenant_id = (known after apply) 2025-01-16 14:27:22.477930 | orchestrator | 14:27:22.477 STDOUT terraform:  + allowed_address_pairs { 2025-01-16 14:27:22.477973 | orchestrator | 14:27:22.477 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-01-16 14:27:22.477997 | orchestrator | 14:27:22.477 STDOUT terraform:  } 2025-01-16 14:27:22.478040 | orchestrator | 14:27:22.478 STDOUT terraform:  + allowed_address_pairs { 2025-01-16 14:27:22.478079 | orchestrator | 14:27:22.478 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-01-16 14:27:22.478103 | orchestrator | 14:27:22.478 STDOUT terraform:  } 2025-01-16 14:27:22.478135 | orchestrator | 14:27:22.478 STDOUT terraform:  + binding (known after apply) 2025-01-16 14:27:22.478158 | orchestrator | 14:27:22.478 STDOUT terraform:  + fixed_ip { 2025-01-16 14:27:22.478205 | orchestrator | 14:27:22.478 STDOUT terraform:  + ip_address = "192.168.16.5" 2025-01-16 14:27:22.478244 | orchestrator | 14:27:22.478 STDOUT terraform:  + subnet_id = (known after apply) 2025-01-16 14:27:22.478268 | orchestrator | 14:27:22.478 STDOUT terraform:  } 2025-01-16 14:27:22.478290 | orchestrator | 14:27:22.478 STDOUT terraform:  } 2025-01-16 14:27:22.478344 | orchestrator | 14:27:22.478 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[0] will be created 2025-01-16 14:27:22.478397 | orchestrator | 14:27:22.478 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-01-16 14:27:22.478442 | orchestrator | 14:27:22.478 STDOUT terraform:  + admin_state_up = (known after apply) 2025-01-16 14:27:22.478487 | orchestrator | 14:27:22.478 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-01-16 14:27:22.478530 | orchestrator | 14:27:22.478 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-01-16 14:27:22.478575 | orchestrator | 14:27:22.478 STDOUT terraform:  + all_tags = (known after apply) 2025-01-16 14:27:22.478619 | orchestrator | 14:27:22.478 STDOUT terraform:  + device_id = (known after apply) 2025-01-16 14:27:22.478662 | orchestrator | 14:27:22.478 STDOUT terraform:  + device_owner = (known after apply) 2025-01-16 14:27:22.478707 | orchestrator | 14:27:22.478 STDOUT terraform:  + dns_assignment = (known after apply) 2025-01-16 14:27:22.478752 | orchestrator | 14:27:22.478 STDOUT terraform:  + dns_name = (known after apply) 2025-01-16 14:27:22.478797 | orchestrator | 14:27:22.478 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.478840 | orchestrator | 14:27:22.478 STDOUT terraform:  + mac_address = (known after apply) 2025-01-16 14:27:22.478885 | orchestrator | 14:27:22.478 STDOUT terraform:  + network_id = (known after apply) 2025-01-16 14:27:22.478929 | orchestrator | 14:27:22.478 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-01-16 14:27:22.478973 | orchestrator | 14:27:22.478 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-01-16 14:27:22.479017 | orchestrator | 14:27:22.478 STDOUT terraform:  + region = 
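The manager management port planned above pins the fixed IP 192.168.16.5 and allows two additional address ranges via allowed_address_pairs. A sketch, where the subnet reference is the assumed resource from the previous note:

```hcl
# Hypothetical sketch of the planned manager port; addresses are taken
# from the plan, the subnet reference is an assumed resource name.
resource "openstack_networking_port_v2" "manager_port_management" {
  network_id = openstack_networking_network_v2.net_management.id

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id # assumed subnet resource
    ip_address = "192.168.16.5"
  }

  allowed_address_pairs {
    ip_address = "192.168.112.0/20"
  }

  allowed_address_pairs {
    ip_address = "192.168.16.8/20"
  }
}
```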
(known after apply) 2025-01-16 14:27:22.479063 | orchestrator | 14:27:22.479 STDOUT terraform:  + security_group_ids = (known after apply) 2025-01-16 14:27:22.479108 | orchestrator | 14:27:22.479 STDOUT terraform:  + tenant_id = (known after apply) 2025-01-16 14:27:22.479137 | orchestrator | 14:27:22.479 STDOUT terraform:  + allowed_address_pairs { 2025-01-16 14:27:22.479193 | orchestrator | 14:27:22.479 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-01-16 14:27:22.479217 | orchestrator | 14:27:22.479 STDOUT terraform:  } 2025-01-16 14:27:22.479246 | orchestrator | 14:27:22.479 STDOUT terraform:  + allowed_address_pairs { 2025-01-16 14:27:22.479283 | orchestrator | 14:27:22.479 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-01-16 14:27:22.479304 | orchestrator | 14:27:22.479 STDOUT terraform:  } 2025-01-16 14:27:22.479336 | orchestrator | 14:27:22.479 STDOUT terraform:  + allowed_address_pairs { 2025-01-16 14:27:22.479372 | orchestrator | 14:27:22.479 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-01-16 14:27:22.479394 | orchestrator | 14:27:22.479 STDOUT terraform:  } 2025-01-16 14:27:22.479421 | orchestrator | 14:27:22.479 STDOUT terraform:  + allowed_address_pairs { 2025-01-16 14:27:22.479457 | orchestrator | 14:27:22.479 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-01-16 14:27:22.479478 | orchestrator | 14:27:22.479 STDOUT terraform:  } 2025-01-16 14:27:22.479513 | orchestrator | 14:27:22.479 STDOUT terraform:  + binding (known after apply) 2025-01-16 14:27:22.479535 | orchestrator | 14:27:22.479 STDOUT terraform:  + fixed_ip { 2025-01-16 14:27:22.479567 | orchestrator | 14:27:22.479 STDOUT terraform:  + ip_address = "192.168.16.10" 2025-01-16 14:27:22.479603 | orchestrator | 14:27:22.479 STDOUT terraform:  + subnet_id = (known after apply) 2025-01-16 14:27:22.479626 | orchestrator | 14:27:22.479 STDOUT terraform:  } 2025-01-16 14:27:22.479649 | orchestrator | 14:27:22.479 STDOUT terraform:  } 2025-01-16 14:27:22.479703 | orchestrator | 14:27:22.479 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[1] will be created 2025-01-16 14:27:22.479755 | orchestrator | 14:27:22.479 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-01-16 14:27:22.479798 | orchestrator | 14:27:22.479 STDOUT terraform:  + admin_state_up = (known after apply) 2025-01-16 14:27:22.479843 | orchestrator | 14:27:22.479 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-01-16 14:27:22.479886 | orchestrator | 14:27:22.479 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-01-16 14:27:22.479935 | orchestrator | 14:27:22.479 STDOUT terraform:  + all_tags = (known after apply) 2025-01-16 14:27:22.479980 | orchestrator | 14:27:22.479 STDOUT terraform:  + device_id = (known after apply) 2025-01-16 14:27:22.480025 | orchestrator | 14:27:22.479 STDOUT terraform:  + device_owner = (known after apply) 2025-01-16 14:27:22.480069 | orchestrator | 14:27:22.480 STDOUT terraform:  + dns_assignment = (known after apply) 2025-01-16 14:27:22.480117 | orchestrator | 14:27:22.480 STDOUT terraform:  + dns_name = (known after apply) 2025-01-16 14:27:22.480161 | orchestrator | 14:27:22.480 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.480216 | orchestrator | 14:27:22.480 STDOUT terraform:  + mac_address = (known after apply) 2025-01-16 14:27:22.480260 | orchestrator | 14:27:22.480 STDOUT terraform:  + network_id = (known after apply) 2025-01-16 14:27:22.480303 | orchestrator | 
14:27:22.480 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-01-16 14:27:22.480353 | orchestrator | 14:27:22.480 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-01-16 14:27:22.480397 | orchestrator | 14:27:22.480 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.480440 | orchestrator | 14:27:22.480 STDOUT terraform:  + security_group_ids = (known after apply) 2025-01-16 14:27:22.480484 | orchestrator | 14:27:22.480 STDOUT terraform:  + tenant_id = (known after apply) 2025-01-16 14:27:22.480512 | orchestrator | 14:27:22.480 STDOUT terraform:  + allowed_address_pairs { 2025-01-16 14:27:22.480550 | orchestrator | 14:27:22.480 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-01-16 14:27:22.480573 | orchestrator | 14:27:22.480 STDOUT terraform:  } 2025-01-16 14:27:22.480601 | orchestrator | 14:27:22.480 STDOUT terraform:  + allowed_address_pairs { 2025-01-16 14:27:22.480638 | orchestrator | 14:27:22.480 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-01-16 14:27:22.480661 | orchestrator | 14:27:22.480 STDOUT terraform:  } 2025-01-16 14:27:22.480689 | orchestrator | 14:27:22.480 STDOUT terraform:  + allowed_address_pairs { 2025-01-16 14:27:22.480725 | orchestrator | 14:27:22.480 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-01-16 14:27:22.480748 | orchestrator | 14:27:22.480 STDOUT terraform:  } 2025-01-16 14:27:22.480775 | orchestrator | 14:27:22.480 STDOUT terraform:  + allowed_address_pairs { 2025-01-16 14:27:22.480812 | orchestrator | 14:27:22.480 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-01-16 14:27:22.480834 | orchestrator | 14:27:22.480 STDOUT terraform:  } 2025-01-16 14:27:22.480868 | orchestrator | 14:27:22.480 STDOUT terraform:  + binding (known after apply) 2025-01-16 14:27:22.480891 | orchestrator | 14:27:22.480 STDOUT terraform:  + fixed_ip { 2025-01-16 14:27:22.480924 | orchestrator | 14:27:22.480 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-01-16 14:27:22.480962 | orchestrator | 14:27:22.480 STDOUT terraform:  + subnet_id = (known after apply) 2025-01-16 14:27:22.480984 | orchestrator | 14:27:22.480 STDOUT terraform:  } 2025-01-16 14:27:22.481006 | orchestrator | 14:27:22.480 STDOUT terraform:  } 2025-01-16 14:27:22.481060 | orchestrator | 14:27:22.481 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-01-16 14:27:22.481115 | orchestrator | 14:27:22.481 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-01-16 14:27:22.481158 | orchestrator | 14:27:22.481 STDOUT terraform:  + admin_state_up = (known after apply) 2025-01-16 14:27:22.481234 | orchestrator | 14:27:22.481 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-01-16 14:27:22.481281 | orchestrator | 14:27:22.481 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-01-16 14:27:22.481325 | orchestrator | 14:27:22.481 STDOUT terraform:  + all_tags = (known after apply) 2025-01-16 14:27:22.481369 | orchestrator | 14:27:22.481 STDOUT terraform:  + device_id = (known after apply) 2025-01-16 14:27:22.481414 | orchestrator | 14:27:22.481 STDOUT terraform:  + device_owner = (known after apply) 2025-01-16 14:27:22.481457 | orchestrator | 14:27:22.481 STDOUT terraform:  + dns_assignment = (known after apply) 2025-01-16 14:27:22.481508 | orchestrator | 14:27:22.481 STDOUT terraform:  + dns_name = (known after apply) 2025-01-16 14:27:22.481554 | orchestrator | 14:27:22.481 STDOUT terraform:  + id = (known after 
apply) 2025-01-16 14:27:22.481598 | orchestrator | 14:27:22.481 STDOUT terraform:  + mac_address = (known after apply) 2025-01-16 14:27:22.481642 | orchestrator | 14:27:22.481 STDOUT terraform:  + network_id = (known after apply) 2025-01-16 14:27:22.481687 | orchestrator | 14:27:22.481 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-01-16 14:27:22.481733 | orchestrator | 14:27:22.481 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-01-16 14:27:22.481787 | orchestrator | 14:27:22.481 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.481831 | orchestrator | 14:27:22.481 STDOUT terraform:  + security_group_ids = (known after apply) 2025-01-16 14:27:22.481875 | orchestrator | 14:27:22.481 STDOUT terraform:  + tenant_id = (known after apply) 2025-01-16 14:27:22.481904 | orchestrator | 14:27:22.481 STDOUT terraform:  + allowed_address_pairs { 2025-01-16 14:27:22.481941 | orchestrator | 14:27:22.481 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-01-16 14:27:22.481964 | orchestrator | 14:27:22.481 STDOUT terraform:  } 2025-01-16 14:27:22.481994 | orchestrator | 14:27:22.481 STDOUT terraform:  + allowed_address_pairs { 2025-01-16 14:27:22.482153 | orchestrator | 14:27:22.482 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-01-16 14:27:22.482216 | orchestrator | 14:27:22.482 STDOUT terraform:  } 2025-01-16 14:27:22.482252 | orchestrator | 14:27:22.482 STDOUT terraform:  + allowed_address_pairs { 2025-01-16 14:27:22.482295 | orchestrator | 14:27:22.482 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-01-16 14:27:22.482318 | orchestrator | 14:27:22.482 STDOUT terraform:  } 2025-01-16 14:27:22.482350 | orchestrator | 14:27:22.482 STDOUT terraform:  + allowed_address_pairs { 2025-01-16 14:27:22.482387 | orchestrator | 14:27:22.482 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-01-16 14:27:22.482410 | orchestrator | 14:27:22.482 STDOUT terraform:  } 2025-01-16 14:27:22.482442 | orchestrator | 14:27:22.482 STDOUT terraform:  + binding (known after apply) 2025-01-16 14:27:22.482465 | orchestrator | 14:27:22.482 STDOUT terraform:  + fixed_ip { 2025-01-16 14:27:22.482502 | orchestrator | 14:27:22.482 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-01-16 14:27:22.482540 | orchestrator | 14:27:22.482 STDOUT terraform:  + subnet_id = (known after apply) 2025-01-16 14:27:22.482562 | orchestrator | 14:27:22.482 STDOUT terraform:  } 2025-01-16 14:27:22.482585 | orchestrator | 14:27:22.482 STDOUT terraform:  } 2025-01-16 14:27:22.482643 | orchestrator | 14:27:22.482 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-01-16 14:27:22.482697 | orchestrator | 14:27:22.482 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-01-16 14:27:22.482743 | orchestrator | 14:27:22.482 STDOUT terraform:  + admin_state_up = (known after apply) 2025-01-16 14:27:22.482786 | orchestrator | 14:27:22.482 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-01-16 14:27:22.482837 | orchestrator | 14:27:22.482 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-01-16 14:27:22.482891 | orchestrator | 14:27:22.482 STDOUT terraform:  + all_tags = (known after apply) 2025-01-16 14:27:22.482950 | orchestrator | 14:27:22.482 STDOUT terraform:  + device_id = (known after apply) 2025-01-16 14:27:22.482997 | orchestrator | 14:27:22.482 STDOUT terraform:  + device_owner = (known after apply) 2025-01-16 14:27:22.483040 | orchestrator | 
14:27:22.483 STDOUT terraform:  + dns_assignment = (known after apply) 2025-01-16 14:27:22.483084 | orchestrator | 14:27:22.483 STDOUT terraform:  + dns_name = (known after apply) 2025-01-16 14:27:22.483129 | orchestrator | 14:27:22.483 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.483175 | orchestrator | 14:27:22.483 STDOUT terraform:  + mac_address = (known after apply) 2025-01-16 14:27:22.483254 | orchestrator | 14:27:22.483 STDOUT terraform:  + network_id = (known after apply) 2025-01-16 14:27:22.483298 | orchestrator | 14:27:22.483 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-01-16 14:27:22.483341 | orchestrator | 14:27:22.483 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-01-16 14:27:22.483386 | orchestrator | 14:27:22.483 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.483429 | orchestrator | 14:27:22.483 STDOUT terraform:  + security_group_ids = (known after apply) 2025-01-16 14:27:22.483473 | orchestrator | 14:27:22.483 STDOUT terraform:  + tenant_id = (known after apply) 2025-01-16 14:27:22.483501 | orchestrator | 14:27:22.483 STDOUT terraform:  + allowed_address_pairs { 2025-01-16 14:27:22.483523 | orchestrator | 14:27:22.483 STDOUT terraform:  2025-01-16 14:27:22.483604 | orchestrator | 14:27:22.483 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-01-16 14:27:22.483629 | orchestrator | 14:27:22.483 STDOUT terraform:  } 2025-01-16 14:27:22.483658 | orchestrator | 14:27:22.483 STDOUT terraform:  + allowed_address_pairs { 2025-01-16 14:27:22.483695 | orchestrator | 14:27:22.483 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-01-16 14:27:22.483717 | orchestrator | 14:27:22.483 STDOUT terraform:  } 2025-01-16 14:27:22.483752 | orchestrator | 14:27:22.483 STDOUT terraform:  + allowed_address_pairs { 2025-01-16 14:27:22.483788 | orchestrator | 14:27:22.483 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-01-16 14:27:22.483810 | orchestrator | 14:27:22.483 STDOUT terraform:  } 2025-01-16 14:27:22.483837 | orchestrator | 14:27:22.483 STDOUT terraform:  + allowed_address_pairs { 2025-01-16 14:27:22.483871 | orchestrator | 14:27:22.483 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-01-16 14:27:22.483893 | orchestrator | 14:27:22.483 STDOUT terraform:  } 2025-01-16 14:27:22.483924 | orchestrator | 14:27:22.483 STDOUT terraform:  + binding (known after apply) 2025-01-16 14:27:22.483947 | orchestrator | 14:27:22.483 STDOUT terraform:  + fixed_ip { 2025-01-16 14:27:22.483979 | orchestrator | 14:27:22.483 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-01-16 14:27:22.484017 | orchestrator | 14:27:22.483 STDOUT terraform:  + subnet_id = (known after apply) 2025-01-16 14:27:22.484047 | orchestrator | 14:27:22.484 STDOUT terraform:  } 2025-01-16 14:27:22.484071 | orchestrator | 14:27:22.484 STDOUT terraform:  } 2025-01-16 14:27:22.484126 | orchestrator | 14:27:22.484 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-01-16 14:27:22.484177 | orchestrator | 14:27:22.484 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-01-16 14:27:22.484239 | orchestrator | 14:27:22.484 STDOUT terraform:  + admin_state_up = (known after apply) 2025-01-16 14:27:22.484283 | orchestrator | 14:27:22.484 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-01-16 14:27:22.484328 | orchestrator | 14:27:22.484 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-01-16 14:27:22.484372 | 
orchestrator | 14:27:22.484 STDOUT terraform:  + all_tags = (known after apply) 2025-01-16 14:27:22.484416 | orchestrator | 14:27:22.484 STDOUT terraform:  + device_id = (known after apply) 2025-01-16 14:27:22.484459 | orchestrator | 14:27:22.484 STDOUT terraform:  + device_owner = (known after apply) 2025-01-16 14:27:22.484501 | orchestrator | 14:27:22.484 STDOUT terraform:  + dns_assignment = (known after apply) 2025-01-16 14:27:22.484547 | orchestrator | 14:27:22.484 STDOUT terraform:  + dns_name = (known after apply) 2025-01-16 14:27:22.484590 | orchestrator | 14:27:22.484 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.484641 | orchestrator | 14:27:22.484 STDOUT terraform:  + mac_address = (known after apply) 2025-01-16 14:27:22.484685 | orchestrator | 14:27:22.484 STDOUT terraform:  + network_id = (known after apply) 2025-01-16 14:27:22.484727 | orchestrator | 14:27:22.484 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-01-16 14:27:22.484769 | orchestrator | 14:27:22.484 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-01-16 14:27:22.484815 | orchestrator | 14:27:22.484 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.484858 | orchestrator | 14:27:22.484 STDOUT terraform:  + security_group_ids = (known after apply) 2025-01-16 14:27:22.484902 | orchestrator | 14:27:22.484 STDOUT terraform:  + tenant_id = (known after apply) 2025-01-16 14:27:22.484928 | orchestrator | 14:27:22.484 STDOUT terraform:  + allowed_address_pairs { 2025-01-16 14:27:22.484966 | orchestrator | 14:27:22.484 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-01-16 14:27:22.484988 | orchestrator | 14:27:22.484 STDOUT terraform:  } 2025-01-16 14:27:22.485014 | orchestrator | 14:27:22.484 STDOUT terraform:  + allowed_address_pairs { 2025-01-16 14:27:22.485050 | orchestrator | 14:27:22.485 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-01-16 14:27:22.485071 | orchestrator | 14:27:22.485 STDOUT terraform:  } 2025-01-16 14:27:22.485102 | orchestrator | 14:27:22.485 STDOUT terraform:  + allowed_address_pairs { 2025-01-16 14:27:22.485138 | orchestrator | 14:27:22.485 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-01-16 14:27:22.485161 | orchestrator | 14:27:22.485 STDOUT terraform:  } 2025-01-16 14:27:22.485205 | orchestrator | 14:27:22.485 STDOUT terraform:  + allowed_address_pairs { 2025-01-16 14:27:22.485249 | orchestrator | 14:27:22.485 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-01-16 14:27:22.485285 | orchestrator | 14:27:22.485 STDOUT terraform:  } 2025-01-16 14:27:22.485331 | orchestrator | 14:27:22.485 STDOUT terraform:  + binding (known after apply) 2025-01-16 14:27:22.485354 | orchestrator | 14:27:22.485 STDOUT terraform:  + fixed_ip { 2025-01-16 14:27:22.485447 | orchestrator | 14:27:22.485 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-01-16 14:27:22.485505 | orchestrator | 14:27:22.485 STDOUT terraform:  + subnet_id = (known after apply) 2025-01-16 14:27:22.485528 | orchestrator | 14:27:22.485 STDOUT terraform:  } 2025-01-16 14:27:22.485566 | orchestrator | 14:27:22.485 STDOUT terraform:  } 2025-01-16 14:27:22.485637 | orchestrator | 14:27:22.485 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-01-16 14:27:22.485700 | orchestrator | 14:27:22.485 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-01-16 14:27:22.485751 | orchestrator | 14:27:22.485 STDOUT terraform:  + admin_state_up = (known after 
apply) 2025-01-16 14:27:22.485810 | orchestrator | 14:27:22.485 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-01-16 14:27:22.485863 | orchestrator | 14:27:22.485 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-01-16 14:27:22.485915 | orchestrator | 14:27:22.485 STDOUT terraform:  + all_tags = (known after apply) 2025-01-16 14:27:22.485971 | orchestrator | 14:27:22.485 STDOUT terraform:  + device_id = (known after apply) 2025-01-16 14:27:22.486030 | orchestrator | 14:27:22.485 STDOUT terraform:  + device_owner = (known after apply) 2025-01-16 14:27:22.486085 | orchestrator | 14:27:22.486 STDOUT terraform:  + dns_assignment = (known after apply) 2025-01-16 14:27:22.486146 | orchestrator | 14:27:22.486 STDOUT terraform:  + dns_name = (known after apply) 2025-01-16 14:27:22.486241 | orchestrator | 14:27:22.486 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.486302 | orchestrator | 14:27:22.486 STDOUT terraform:  + mac_address = (known after apply) 2025-01-16 14:27:22.486355 | orchestrator | 14:27:22.486 STDOUT terraform:  + network_id = (known after apply) 2025-01-16 14:27:22.486414 | orchestrator | 14:27:22.486 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-01-16 14:27:22.486477 | orchestrator | 14:27:22.486 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-01-16 14:27:22.486525 | orchestrator | 14:27:22.486 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.486586 | orchestrator | 14:27:22.486 STDOUT terraform:  + security_group_ids = (known after apply) 2025-01-16 14:27:22.486647 | orchestrator | 14:27:22.486 STDOUT terraform:  + tenant_id = (known after apply) 2025-01-16 14:27:22.486678 | orchestrator | 14:27:22.486 STDOUT terraform:  + allowed_address_pairs { 2025-01-16 14:27:22.486734 | orchestrator | 14:27:22.486 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-01-16 14:27:22.486760 | orchestrator | 14:27:22.486 STDOUT terraform:  } 2025-01-16 14:27:22.486805 | orchestrator | 14:27:22.486 STDOUT terraform:  + allowed_address_pairs { 2025-01-16 14:27:22.486894 | orchestrator | 14:27:22.486 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-01-16 14:27:22.486918 | orchestrator | 14:27:22.486 STDOUT terraform:  } 2025-01-16 14:27:22.486960 | orchestrator | 14:27:22.486 STDOUT terraform:  + allowed_address_pairs { 2025-01-16 14:27:22.487000 | orchestrator | 14:27:22.486 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-01-16 14:27:22.487037 | orchestrator | 14:27:22.487 STDOUT terraform:  } 2025-01-16 14:27:22.487068 | orchestrator | 14:27:22.487 STDOUT terraform:  + allowed_address_pairs { 2025-01-16 14:27:22.487123 | orchestrator | 14:27:22.487 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-01-16 14:27:22.487148 | orchestrator | 14:27:22.487 STDOUT terraform:  } 2025-01-16 14:27:22.487210 | orchestrator | 14:27:22.487 STDOUT terraform:  + binding (known after apply) 2025-01-16 14:27:22.487236 | orchestrator | 14:27:22.487 STDOUT terraform:  + fixed_ip { 2025-01-16 14:27:22.487287 | orchestrator | 14:27:22.487 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-01-16 14:27:22.487338 | orchestrator | 14:27:22.487 STDOUT terraform:  + subnet_id = (known after apply) 2025-01-16 14:27:22.487368 | orchestrator | 14:27:22.487 STDOUT terraform:  } 2025-01-16 14:27:22.487392 | orchestrator | 14:27:22.487 STDOUT terraform:  } 2025-01-16 14:27:22.487464 | orchestrator | 14:27:22.487 STDOUT terraform:  # 
openstack_networking_router_interface_v2.router_interface will be created 2025-01-16 14:27:22.487539 | orchestrator | 14:27:22.487 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-01-16 14:27:22.487569 | orchestrator | 14:27:22.487 STDOUT terraform:  + force_destroy = false 2025-01-16 14:27:22.487621 | orchestrator | 14:27:22.487 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.487675 | orchestrator | 14:27:22.487 STDOUT terraform:  + port_id = (known after apply) 2025-01-16 14:27:22.487714 | orchestrator | 14:27:22.487 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.487768 | orchestrator | 14:27:22.487 STDOUT terraform:  + router_id = (known after apply) 2025-01-16 14:27:22.487815 | orchestrator | 14:27:22.487 STDOUT terraform:  + subnet_id = (known after apply) 2025-01-16 14:27:22.487845 | orchestrator | 14:27:22.487 STDOUT terraform:  } 2025-01-16 14:27:22.487890 | orchestrator | 14:27:22.487 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-01-16 14:27:22.487947 | orchestrator | 14:27:22.487 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-01-16 14:27:22.488008 | orchestrator | 14:27:22.487 STDOUT terraform:  + admin_state_up = (known after apply) 2025-01-16 14:27:22.488061 | orchestrator | 14:27:22.488 STDOUT terraform:  + all_tags = (known after apply) 2025-01-16 14:27:22.488101 | orchestrator | 14:27:22.488 STDOUT terraform:  + availability_zone_hints = [ 2025-01-16 14:27:22.488125 | orchestrator | 14:27:22.488 STDOUT terraform:  + "nova", 2025-01-16 14:27:22.488163 | orchestrator | 14:27:22.488 STDOUT terraform:  ] 2025-01-16 14:27:22.488254 | orchestrator | 14:27:22.488 STDOUT terraform:  + distributed = (known after apply) 2025-01-16 14:27:22.488326 | orchestrator | 14:27:22.488 STDOUT terraform:  + enable_snat = (known after apply) 2025-01-16 14:27:22.488401 | orchestrator | 14:27:22.488 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-01-16 14:27:22.488447 | orchestrator | 14:27:22.488 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.488501 | orchestrator | 14:27:22.488 STDOUT terraform:  + name = "testbed" 2025-01-16 14:27:22.488560 | orchestrator | 14:27:22.488 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.488606 | orchestrator | 14:27:22.488 STDOUT terraform:  + tenant_id = (known after apply) 2025-01-16 14:27:22.488661 | orchestrator | 14:27:22.488 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-01-16 14:27:22.488683 | orchestrator | 14:27:22.488 STDOUT terraform:  } 2025-01-16 14:27:22.488759 | orchestrator | 14:27:22.488 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-01-16 14:27:22.488837 | orchestrator | 14:27:22.488 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-01-16 14:27:22.488883 | orchestrator | 14:27:22.488 STDOUT terraform:  + description = "ssh" 2025-01-16 14:27:22.488916 | orchestrator | 14:27:22.488 STDOUT terraform:  + direction = "ingress" 2025-01-16 14:27:22.488960 | orchestrator | 14:27:22.488 STDOUT terraform:  + ethertype = "IPv4" 2025-01-16 14:27:22.489010 | orchestrator | 14:27:22.488 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.489047 | orchestrator | 14:27:22.489 STDOUT terraform:  + port_range_max = 22 2025-01-16 14:27:22.489077 | orchestrator | 14:27:22.489 
STDOUT terraform:  + port_range_min = 22 2025-01-16 14:27:22.489124 | orchestrator | 14:27:22.489 STDOUT terraform:  + protocol = "tcp" 2025-01-16 14:27:22.489175 | orchestrator | 14:27:22.489 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.489235 | orchestrator | 14:27:22.489 STDOUT terraform:  + remote_group_id = (known after apply) 2025-01-16 14:27:22.489277 | orchestrator | 14:27:22.489 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-01-16 14:27:22.489322 | orchestrator | 14:27:22.489 STDOUT terraform:  + security_group_id = (known after apply) 2025-01-16 14:27:22.489377 | orchestrator | 14:27:22.489 STDOUT terraform:  + tenant_id = (known after apply) 2025-01-16 14:27:22.489401 | orchestrator | 14:27:22.489 STDOUT terraform:  } 2025-01-16 14:27:22.489475 | orchestrator | 14:27:22.489 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-01-16 14:27:22.489552 | orchestrator | 14:27:22.489 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-01-16 14:27:22.489586 | orchestrator | 14:27:22.489 STDOUT terraform:  + description = "wireguard" 2025-01-16 14:27:22.489619 | orchestrator | 14:27:22.489 STDOUT terraform:  + direction = "ingress" 2025-01-16 14:27:22.489649 | orchestrator | 14:27:22.489 STDOUT terraform:  + ethertype = "IPv4" 2025-01-16 14:27:22.489705 | orchestrator | 14:27:22.489 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.489741 | orchestrator | 14:27:22.489 STDOUT terraform:  + port_range_max = 51820 2025-01-16 14:27:22.489771 | orchestrator | 14:27:22.489 STDOUT terraform:  + port_range_min = 51820 2025-01-16 14:27:22.489802 | orchestrator | 14:27:22.489 STDOUT terraform:  + protocol = "udp" 2025-01-16 14:27:22.489843 | orchestrator | 14:27:22.489 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.489891 | orchestrator | 14:27:22.489 STDOUT terraform:  + remote_group_id = (known after apply) 2025-01-16 14:27:22.489928 | orchestrator | 14:27:22.489 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-01-16 14:27:22.489967 | orchestrator | 14:27:22.489 STDOUT terraform:  + security_group_id = (known after apply) 2025-01-16 14:27:22.490006 | orchestrator | 14:27:22.489 STDOUT terraform:  + tenant_id = (known after apply) 2025-01-16 14:27:22.490044 | orchestrator | 14:27:22.490 STDOUT terraform:  } 2025-01-16 14:27:22.490110 | orchestrator | 14:27:22.490 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-01-16 14:27:22.490171 | orchestrator | 14:27:22.490 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-01-16 14:27:22.490247 | orchestrator | 14:27:22.490 STDOUT terraform:  + direction = "ingress" 2025-01-16 14:27:22.490304 | orchestrator | 14:27:22.490 STDOUT terraform:  + ethertype = "IPv4" 2025-01-16 14:27:22.490346 | orchestrator | 14:27:22.490 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.490376 | orchestrator | 14:27:22.490 STDOUT terraform:  + protocol = "tcp" 2025-01-16 14:27:22.490416 | orchestrator | 14:27:22.490 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.490454 | orchestrator | 14:27:22.490 STDOUT terraform:  + remote_group_id = (known after apply) 2025-01-16 14:27:22.490499 | orchestrator | 14:27:22.490 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-01-16 14:27:22.490538 | orchestrator | 
14:27:22.490 STDOUT terraform:  + security_group_id = (known after apply) 2025-01-16 14:27:22.490576 | orchestrator | 14:27:22.490 STDOUT terraform:  + tenant_id = (known after apply) 2025-01-16 14:27:22.490598 | orchestrator | 14:27:22.490 STDOUT terraform:  } 2025-01-16 14:27:22.490659 | orchestrator | 14:27:22.490 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-01-16 14:27:22.490720 | orchestrator | 14:27:22.490 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-01-16 14:27:22.490754 | orchestrator | 14:27:22.490 STDOUT terraform:  + direction = "ingress" 2025-01-16 14:27:22.490783 | orchestrator | 14:27:22.490 STDOUT terraform:  + ethertype = "IPv4" 2025-01-16 14:27:22.490821 | orchestrator | 14:27:22.490 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.490850 | orchestrator | 14:27:22.490 STDOUT terraform:  + protocol = "udp" 2025-01-16 14:27:22.490892 | orchestrator | 14:27:22.490 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.490930 | orchestrator | 14:27:22.490 STDOUT terraform:  + remote_group_id = (known after apply) 2025-01-16 14:27:22.490979 | orchestrator | 14:27:22.490 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-01-16 14:27:22.491031 | orchestrator | 14:27:22.490 STDOUT terraform:  + security_group_id = (known after apply) 2025-01-16 14:27:22.491073 | orchestrator | 14:27:22.491 STDOUT terraform:  + tenant_id = (known after apply) 2025-01-16 14:27:22.491097 | orchestrator | 14:27:22.491 STDOUT terraform:  } 2025-01-16 14:27:22.491160 | orchestrator | 14:27:22.491 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-01-16 14:27:22.491234 | orchestrator | 14:27:22.491 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-01-16 14:27:22.491273 | orchestrator | 14:27:22.491 STDOUT terraform:  + direction = "ingress" 2025-01-16 14:27:22.491302 | orchestrator | 14:27:22.491 STDOUT terraform:  + ethertype = "IPv4" 2025-01-16 14:27:22.491340 | orchestrator | 14:27:22.491 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.491369 | orchestrator | 14:27:22.491 STDOUT terraform:  + protocol = "icmp" 2025-01-16 14:27:22.491406 | orchestrator | 14:27:22.491 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.491443 | orchestrator | 14:27:22.491 STDOUT terraform:  + remote_group_id = (known after apply) 2025-01-16 14:27:22.491474 | orchestrator | 14:27:22.491 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-01-16 14:27:22.491511 | orchestrator | 14:27:22.491 STDOUT terraform:  + security_group_id = (known after apply) 2025-01-16 14:27:22.491550 | orchestrator | 14:27:22.491 STDOUT terraform:  + tenant_id = (known after apply) 2025-01-16 14:27:22.491573 | orchestrator | 14:27:22.491 STDOUT terraform:  } 2025-01-16 14:27:22.491630 | orchestrator | 14:27:22.491 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-01-16 14:27:22.491687 | orchestrator | 14:27:22.491 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-01-16 14:27:22.491721 | orchestrator | 14:27:22.491 STDOUT terraform:  + direction = "ingress" 2025-01-16 14:27:22.491751 | orchestrator | 14:27:22.491 STDOUT terraform:  + ethertype = "IPv4" 2025-01-16 14:27:22.491788 | orchestrator | 
14:27:22.491 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.491816 | orchestrator | 14:27:22.491 STDOUT terraform:  + protocol = "tcp" 2025-01-16 14:27:22.491854 | orchestrator | 14:27:22.491 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.491890 | orchestrator | 14:27:22.491 STDOUT terraform:  + remote_group_id = (known after apply) 2025-01-16 14:27:22.491921 | orchestrator | 14:27:22.491 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-01-16 14:27:22.491958 | orchestrator | 14:27:22.491 STDOUT terraform:  + security_group_id = (known after apply) 2025-01-16 14:27:22.491997 | orchestrator | 14:27:22.491 STDOUT terraform:  + tenant_id = (known after apply) 2025-01-16 14:27:22.492026 | orchestrator | 14:27:22.492 STDOUT terraform:  } 2025-01-16 14:27:22.492082 | orchestrator | 14:27:22.492 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-01-16 14:27:22.492141 | orchestrator | 14:27:22.492 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-01-16 14:27:22.492179 | orchestrator | 14:27:22.492 STDOUT terraform:  + direction = "ingress" 2025-01-16 14:27:22.492244 | orchestrator | 14:27:22.492 STDOUT terraform:  + ethertype = "IPv4" 2025-01-16 14:27:22.492286 | orchestrator | 14:27:22.492 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.492316 | orchestrator | 14:27:22.492 STDOUT terraform:  + protocol = "udp" 2025-01-16 14:27:22.492371 | orchestrator | 14:27:22.492 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.492411 | orchestrator | 14:27:22.492 STDOUT terraform:  + remote_group_id = (known after apply) 2025-01-16 14:27:22.492445 | orchestrator | 14:27:22.492 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-01-16 14:27:22.492484 | orchestrator | 14:27:22.492 STDOUT terraform:  + security_group_id = (known after apply) 2025-01-16 14:27:22.492522 | orchestrator | 14:27:22.492 STDOUT terraform:  + tenant_id = (known after apply) 2025-01-16 14:27:22.492545 | orchestrator | 14:27:22.492 STDOUT terraform:  } 2025-01-16 14:27:22.492604 | orchestrator | 14:27:22.492 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-01-16 14:27:22.492666 | orchestrator | 14:27:22.492 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-01-16 14:27:22.492700 | orchestrator | 14:27:22.492 STDOUT terraform:  + direction = "ingress" 2025-01-16 14:27:22.492730 | orchestrator | 14:27:22.492 STDOUT terraform:  + ethertype = "IPv4" 2025-01-16 14:27:22.492769 | orchestrator | 14:27:22.492 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.492799 | orchestrator | 14:27:22.492 STDOUT terraform:  + protocol = "icmp" 2025-01-16 14:27:22.492838 | orchestrator | 14:27:22.492 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.492877 | orchestrator | 14:27:22.492 STDOUT terraform:  + remote_group_id = (known after apply) 2025-01-16 14:27:22.492911 | orchestrator | 14:27:22.492 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-01-16 14:27:22.492949 | orchestrator | 14:27:22.492 STDOUT terraform:  + security_group_id = (known after apply) 2025-01-16 14:27:22.492988 | orchestrator | 14:27:22.492 STDOUT terraform:  + tenant_id = (known after apply) 2025-01-16 14:27:22.493011 | orchestrator | 14:27:22.492 STDOUT terraform:  } 2025-01-16 14:27:22.493068 | orchestrator | 14:27:22.493 STDOUT 
terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-01-16 14:27:22.493126 | orchestrator | 14:27:22.493 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-01-16 14:27:22.493156 | orchestrator | 14:27:22.493 STDOUT terraform:  + description = "vrrp" 2025-01-16 14:27:22.493205 | orchestrator | 14:27:22.493 STDOUT terraform:  + direction = "ingress" 2025-01-16 14:27:22.493237 | orchestrator | 14:27:22.493 STDOUT terraform:  + ethertype = "IPv4" 2025-01-16 14:27:22.493277 | orchestrator | 14:27:22.493 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.493305 | orchestrator | 14:27:22.493 STDOUT terraform:  + protocol = "112" 2025-01-16 14:27:22.493349 | orchestrator | 14:27:22.493 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.493390 | orchestrator | 14:27:22.493 STDOUT terraform:  + remote_group_id = (known after apply) 2025-01-16 14:27:22.493423 | orchestrator | 14:27:22.493 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-01-16 14:27:22.493459 | orchestrator | 14:27:22.493 STDOUT terraform:  + security_group_id = (known after apply) 2025-01-16 14:27:22.493498 | orchestrator | 14:27:22.493 STDOUT terraform:  + tenant_id = (known after apply) 2025-01-16 14:27:22.493527 | orchestrator | 14:27:22.493 STDOUT terraform:  } 2025-01-16 14:27:22.493585 | orchestrator | 14:27:22.493 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-01-16 14:27:22.493639 | orchestrator | 14:27:22.493 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-01-16 14:27:22.493675 | orchestrator | 14:27:22.493 STDOUT terraform:  + all_tags = (known after apply) 2025-01-16 14:27:22.493718 | orchestrator | 14:27:22.493 STDOUT terraform:  + description = "management security group" 2025-01-16 14:27:22.493755 | orchestrator | 14:27:22.493 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.493792 | orchestrator | 14:27:22.493 STDOUT terraform:  + name = "testbed-management" 2025-01-16 14:27:22.493828 | orchestrator | 14:27:22.493 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.493862 | orchestrator | 14:27:22.493 STDOUT terraform:  + stateful = (known after apply) 2025-01-16 14:27:22.493906 | orchestrator | 14:27:22.493 STDOUT terraform:  + tenant_id = (known after apply) 2025-01-16 14:27:22.493929 | orchestrator | 14:27:22.493 STDOUT terraform:  } 2025-01-16 14:27:22.493987 | orchestrator | 14:27:22.493 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-01-16 14:27:22.494056 | orchestrator | 14:27:22.493 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-01-16 14:27:22.494095 | orchestrator | 14:27:22.494 STDOUT terraform:  + all_tags = (known after apply) 2025-01-16 14:27:22.494138 | orchestrator | 14:27:22.494 STDOUT terraform:  + description = "node security group" 2025-01-16 14:27:22.494176 | orchestrator | 14:27:22.494 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.494237 | orchestrator | 14:27:22.494 STDOUT terraform:  + name = "testbed-node" 2025-01-16 14:27:22.494280 | orchestrator | 14:27:22.494 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.494318 | orchestrator | 14:27:22.494 STDOUT terraform:  + stateful = (known after apply) 2025-01-16 14:27:22.494354 | orchestrator | 14:27:22.494 STDOUT terraform:  + tenant_id = 
(known after apply) 2025-01-16 14:27:22.494376 | orchestrator | 14:27:22.494 STDOUT terraform:  } 2025-01-16 14:27:22.494426 | orchestrator | 14:27:22.494 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-01-16 14:27:22.494480 | orchestrator | 14:27:22.494 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-01-16 14:27:22.494517 | orchestrator | 14:27:22.494 STDOUT terraform:  + all_tags = (known after apply) 2025-01-16 14:27:22.494556 | orchestrator | 14:27:22.494 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-01-16 14:27:22.494592 | orchestrator | 14:27:22.494 STDOUT terraform:  + dns_nameservers = [ 2025-01-16 14:27:22.494619 | orchestrator | 14:27:22.494 STDOUT terraform:  + "8.8.8.8", 2025-01-16 14:27:22.494644 | orchestrator | 14:27:22.494 STDOUT terraform:  + "9.9.9.9", 2025-01-16 14:27:22.494675 | orchestrator | 14:27:22.494 STDOUT terraform:  ] 2025-01-16 14:27:22.494704 | orchestrator | 14:27:22.494 STDOUT terraform:  + enable_dhcp = true 2025-01-16 14:27:22.494744 | orchestrator | 14:27:22.494 STDOUT terraform:  + gateway_ip = (known after apply) 2025-01-16 14:27:22.494787 | orchestrator | 14:27:22.494 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.494815 | orchestrator | 14:27:22.494 STDOUT terraform:  + ip_version = 4 2025-01-16 14:27:22.494852 | orchestrator | 14:27:22.494 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-01-16 14:27:22.494889 | orchestrator | 14:27:22.494 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-01-16 14:27:22.494936 | orchestrator | 14:27:22.494 STDOUT terraform:  + name = "subnet-testbed-management" 2025-01-16 14:27:22.494991 | orchestrator | 14:27:22.494 STDOUT terraform:  + network_id = (known after apply) 2025-01-16 14:27:22.495021 | orchestrator | 14:27:22.495 STDOUT terraform:  + no_gateway = false 2025-01-16 14:27:22.495060 | orchestrator | 14:27:22.495 STDOUT terraform:  + region = (known after apply) 2025-01-16 14:27:22.495100 | orchestrator | 14:27:22.495 STDOUT terraform:  + service_types = (known after apply) 2025-01-16 14:27:22.495139 | orchestrator | 14:27:22.495 STDOUT terraform:  + tenant_id = (known after apply) 2025-01-16 14:27:22.495167 | orchestrator | 14:27:22.495 STDOUT terraform:  + allocation_pool { 2025-01-16 14:27:22.495214 | orchestrator | 14:27:22.495 STDOUT terraform:  + end = "192.168.31.250" 2025-01-16 14:27:22.495247 | orchestrator | 14:27:22.495 STDOUT terraform:  + start = "192.168.31.200" 2025-01-16 14:27:22.495269 | orchestrator | 14:27:22.495 STDOUT terraform:  } 2025-01-16 14:27:22.495289 | orchestrator | 14:27:22.495 STDOUT terraform:  } 2025-01-16 14:27:22.495322 | orchestrator | 14:27:22.495 STDOUT terraform:  # terraform_data.image will be created 2025-01-16 14:27:22.495356 | orchestrator | 14:27:22.495 STDOUT terraform:  + resource "terraform_data" "image" { 2025-01-16 14:27:22.495387 | orchestrator | 14:27:22.495 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.495418 | orchestrator | 14:27:22.495 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-01-16 14:27:22.495460 | orchestrator | 14:27:22.495 STDOUT terraform:  + output = (known after apply) 2025-01-16 14:27:22.495545 | orchestrator | 14:27:22.495 STDOUT terraform:  } 2025-01-16 14:27:22.495606 | orchestrator | 14:27:22.495 STDOUT terraform:  # terraform_data.image_node will be created 2025-01-16 14:27:22.495665 | orchestrator | 14:27:22.495 STDOUT terraform:  + resource "terraform_data" "image_node" { 
2025-01-16 14:27:22.495717 | orchestrator | 14:27:22.495 STDOUT terraform:  + id = (known after apply) 2025-01-16 14:27:22.495764 | orchestrator | 14:27:22.495 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-01-16 14:27:22.495809 | orchestrator | 14:27:22.495 STDOUT terraform:  + output = (known after apply) 2025-01-16 14:27:22.495842 | orchestrator | 14:27:22.495 STDOUT terraform:  } 2025-01-16 14:27:22.495882 | orchestrator | 14:27:22.495 STDOUT terraform: Plan: 82 to add, 0 to change, 0 to destroy. 2025-01-16 14:27:22.495905 | orchestrator | 14:27:22.495 STDOUT terraform: Changes to Outputs: 2025-01-16 14:27:22.495939 | orchestrator | 14:27:22.495 STDOUT terraform:  + manager_address = (sensitive value) 2025-01-16 14:27:22.495973 | orchestrator | 14:27:22.495 STDOUT terraform:  + private_key = (sensitive value) 2025-01-16 14:27:22.559158 | orchestrator | 14:27:22.558 STDOUT terraform: terraform_data.image: Creating... 2025-01-16 14:27:22.673713 | orchestrator | 14:27:22.672 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=5b856e6e-d9e2-c51e-5e8d-1ca22a8037a5] 2025-01-16 14:27:22.688433 | orchestrator | 14:27:22.672 STDOUT terraform: terraform_data.image_node: Creating... 2025-01-16 14:27:22.688554 | orchestrator | 14:27:22.673 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=ad1b7115-eca6-1fad-10f8-fbf2ea87f95a] 2025-01-16 14:27:22.688584 | orchestrator | 14:27:22.688 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-01-16 14:27:22.688751 | orchestrator | 14:27:22.688 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-01-16 14:27:22.694382 | orchestrator | 14:27:22.694 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-01-16 14:27:22.695278 | orchestrator | 14:27:22.695 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-01-16 14:27:22.698155 | orchestrator | 14:27:22.697 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-01-16 14:27:22.698298 | orchestrator | 14:27:22.698 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Creating... 2025-01-16 14:27:22.699712 | orchestrator | 14:27:22.699 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-01-16 14:27:22.700263 | orchestrator | 14:27:22.700 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-01-16 14:27:22.700584 | orchestrator | 14:27:22.700 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Creating... 2025-01-16 14:27:22.709123 | orchestrator | 14:27:22.708 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-01-16 14:27:23.141759 | orchestrator | 14:27:23.141 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-01-16 14:27:23.145568 | orchestrator | 14:27:23.145 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-01-16 14:27:23.150334 | orchestrator | 14:27:23.149 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Creating... 2025-01-16 14:27:23.153963 | orchestrator | 14:27:23.153 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 
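
Annotation: the plan output above declares six openstack_networking_port_v2.node_port_management ports with fixed addresses 192.168.16.10 through 192.168.16.15, each carrying allowed_address_pairs for the shared VIP/gateway addresses. A minimal HCL sketch of such a port declaration follows; the resource type, addresses, and referenced resource names are taken from the plan, while the count wiring and the choice of attached security group are illustrative assumptions and may differ from the actual testbed Terraform.

  # Illustrative sketch only - reconstructed from the plan output above,
  # not the actual osism/testbed configuration.
  resource "openstack_networking_port_v2" "node_port_management" {
    count      = 6
    network_id = openstack_networking_network_v2.net_management.id

    fixed_ip {
      subnet_id  = openstack_networking_subnet_v2.subnet_management.id
      ip_address = "192.168.16.${10 + count.index}"  # .10 .. .15 as planned above
    }

    # Addresses the nodes are allowed to answer for (VIPs / routed prefixes)
    allowed_address_pairs {
      ip_address = "192.168.112.0/20"
    }
    allowed_address_pairs {
      ip_address = "192.168.16.254/20"
    }
    allowed_address_pairs {
      ip_address = "192.168.16.8/20"
    }
    allowed_address_pairs {
      ip_address = "192.168.16.9/20"
    }

    # Assumption: which security group is attached is not visible in this excerpt.
    security_group_ids = [openstack_networking_secgroup_v2.security_group_node.id]
  }
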
2025-01-16 14:27:23.776936 | orchestrator | 14:27:23.776 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2025-01-16 14:27:23.784498 | orchestrator | 14:27:23.784 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Creating... 2025-01-16 14:27:28.590765 | orchestrator | 14:27:28.590 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=92b3b970-b6ab-4328-8e4d-331a200375cf] 2025-01-16 14:27:28.598848 | orchestrator | 14:27:28.598 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-01-16 14:27:32.696277 | orchestrator | 14:27:32.695 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed] 2025-01-16 14:27:32.699336 | orchestrator | 14:27:32.699 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed] 2025-01-16 14:27:32.699876 | orchestrator | 14:27:32.699 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Still creating... [10s elapsed] 2025-01-16 14:27:32.700357 | orchestrator | 14:27:32.700 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed] 2025-01-16 14:27:32.703742 | orchestrator | 14:27:32.703 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Still creating... [10s elapsed] 2025-01-16 14:27:32.710106 | orchestrator | 14:27:32.709 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed] 2025-01-16 14:27:33.150827 | orchestrator | 14:27:33.150 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Still creating... [10s elapsed] 2025-01-16 14:27:33.154948 | orchestrator | 14:27:33.154 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed] 2025-01-16 14:27:33.298747 | orchestrator | 14:27:33.298 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 10s [id=a3fa75ed-12ad-4d98-b1e3-06058efbf95a] 2025-01-16 14:27:33.304566 | orchestrator | 14:27:33.303 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Creating... 2025-01-16 14:27:33.326515 | orchestrator | 14:27:33.326 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Creation complete after 10s [id=0646438b-3566-4bd7-ac9f-c7444a60ff3f] 2025-01-16 14:27:33.332844 | orchestrator | 14:27:33.332 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-01-16 14:27:33.345803 | orchestrator | 14:27:33.345 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 10s [id=a8ee2823-701b-4f46-84dc-c0a96e4e2751] 2025-01-16 14:27:33.354006 | orchestrator | 14:27:33.353 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 10s [id=a1dc0d1a-c1df-431a-8f1d-6c726706706a] 2025-01-16 14:27:33.354815 | orchestrator | 14:27:33.354 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Creating... 2025-01-16 14:27:33.360973 | orchestrator | 14:27:33.360 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Creating... 
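
Annotation: the management network has just finished creating, and the plan above showed the subnet it will carry ("subnet-testbed-management", CIDR 192.168.16.0/20, DNS 8.8.8.8/9.9.9.9, allocation pool 192.168.31.200-250). A minimal HCL sketch of that subnet, using only values printed in the plan; the surrounding network resource reference is assumed.

  # Illustrative sketch - values copied from the plan output above.
  resource "openstack_networking_subnet_v2" "subnet_management" {
    name            = "subnet-testbed-management"
    network_id      = openstack_networking_network_v2.net_management.id
    cidr            = "192.168.16.0/20"
    ip_version      = 4
    enable_dhcp     = true
    dns_nameservers = ["8.8.8.8", "9.9.9.9"]

    # DHCP range kept outside the statically assigned node/manager addresses
    allocation_pool {
      start = "192.168.31.200"
      end   = "192.168.31.250"
    }
  }
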
2025-01-16 14:27:33.378595 | orchestrator | 14:27:33.378 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 10s [id=d1e8c7e9-38c3-4780-8ab7-178f632f9eb8] 2025-01-16 14:27:33.379103 | orchestrator | 14:27:33.378 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Creation complete after 10s [id=f7bd705e-b5e0-4446-bf55-1dfa4188ee04] 2025-01-16 14:27:33.387681 | orchestrator | 14:27:33.387 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-01-16 14:27:33.387814 | orchestrator | 14:27:33.387 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Creating... 2025-01-16 14:27:33.408876 | orchestrator | 14:27:33.408 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Creation complete after 10s [id=511497a6-ce11-47ca-8c02-acccaddecbc9] 2025-01-16 14:27:33.414743 | orchestrator | 14:27:33.414 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-01-16 14:27:33.434455 | orchestrator | 14:27:33.434 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 10s [id=20cc569f-32be-418c-b198-01024ddefd54] 2025-01-16 14:27:33.440769 | orchestrator | 14:27:33.440 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Creating... 2025-01-16 14:27:33.785315 | orchestrator | 14:27:33.784 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Still creating... [10s elapsed] 2025-01-16 14:27:33.971306 | orchestrator | 14:27:33.970 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Creation complete after 10s [id=d740be6b-1b5d-4ad1-85aa-7275c0983c2d] 2025-01-16 14:27:33.981497 | orchestrator | 14:27:33.981 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-01-16 14:27:38.601779 | orchestrator | 14:27:38.601 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed] 2025-01-16 14:27:38.758982 | orchestrator | 14:27:38.758 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 10s [id=97685de2-31d7-40a6-8026-91294c9f6af1] 2025-01-16 14:27:38.764094 | orchestrator | 14:27:38.763 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-01-16 14:27:43.305044 | orchestrator | 14:27:43.304 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Still creating... [10s elapsed] 2025-01-16 14:27:43.334330 | orchestrator | 14:27:43.333 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed] 2025-01-16 14:27:43.355486 | orchestrator | 14:27:43.355 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Still creating... [10s elapsed] 2025-01-16 14:27:43.362080 | orchestrator | 14:27:43.361 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Still creating... [10s elapsed] 2025-01-16 14:27:43.389369 | orchestrator | 14:27:43.388 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed] 2025-01-16 14:27:43.389474 | orchestrator | 14:27:43.389 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Still creating... [10s elapsed] 2025-01-16 14:27:43.416573 | orchestrator | 14:27:43.416 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed] 2025-01-16 14:27:43.441704 | orchestrator | 14:27:43.441 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Still creating... 
[10s elapsed] 2025-01-16 14:27:43.501041 | orchestrator | 14:27:43.500 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Creation complete after 11s [id=08ea27e5-46c9-491a-b107-4789383846f8] 2025-01-16 14:27:43.514774 | orchestrator | 14:27:43.514 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-01-16 14:27:43.535113 | orchestrator | 14:27:43.534 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 11s [id=144e2d31-b1a0-45c5-bee7-951938d47a21] 2025-01-16 14:27:43.543501 | orchestrator | 14:27:43.543 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-01-16 14:27:43.574122 | orchestrator | 14:27:43.573 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Creation complete after 11s [id=2e08297a-47bf-4aeb-9f10-e9a4d07161c8] 2025-01-16 14:27:43.581112 | orchestrator | 14:27:43.580 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-01-16 14:27:43.592950 | orchestrator | 14:27:43.592 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Creation complete after 11s [id=0aac5059-2a3a-4141-840f-fb09a7465e72] 2025-01-16 14:27:43.601510 | orchestrator | 14:27:43.601 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-01-16 14:27:43.679341 | orchestrator | 14:27:43.678 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Creation complete after 11s [id=18d7b00d-e0db-4cba-a669-4d06ca6689ec] 2025-01-16 14:27:43.688362 | orchestrator | 14:27:43.688 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-01-16 14:27:43.775826 | orchestrator | 14:27:43.775 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Creation complete after 11s [id=72b30f3d-ea4f-4fbe-a722-d77662b0ee19] 2025-01-16 14:27:43.776876 | orchestrator | 14:27:43.776 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 11s [id=5ec46ceb-bae7-4572-9a93-049002478163] 2025-01-16 14:27:43.794507 | orchestrator | 14:27:43.794 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-01-16 14:27:43.794774 | orchestrator | 14:27:43.794 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-01-16 14:27:43.800717 | orchestrator | 14:27:43.794 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 11s [id=32889f36-f55b-4b84-b5ce-98c4b6c26bc3] 2025-01-16 14:27:43.800803 | orchestrator | 14:27:43.800 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=4d2d6e087976f2e86862364cf9bfa3a70fe9b372] 2025-01-16 14:27:43.803015 | orchestrator | 14:27:43.802 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=84696e1bb5b75bb6883395f28ff7063046927cb2] 2025-01-16 14:27:43.804529 | orchestrator | 14:27:43.804 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-01-16 14:27:43.982523 | orchestrator | 14:27:43.982 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed] 2025-01-16 14:27:44.307459 | orchestrator | 14:27:44.307 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 10s [id=f83c935f-85b0-44cd-99aa-a607723e7f03] 2025-01-16 14:27:48.765625 | orchestrator | 14:27:48.765 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... 
[10s elapsed] 2025-01-16 14:27:49.080780 | orchestrator | 14:27:49.080 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 10s [id=bd92237a-eb9b-415b-89cb-c6ad5949ce4a] 2025-01-16 14:27:49.509742 | orchestrator | 14:27:49.509 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=61bb4648-6a15-4c42-ae36-a0f136b4fd7a] 2025-01-16 14:27:49.517858 | orchestrator | 14:27:49.517 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-01-16 14:27:53.516229 | orchestrator | 14:27:53.515 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed] 2025-01-16 14:27:53.545178 | orchestrator | 14:27:53.544 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed] 2025-01-16 14:27:53.582114 | orchestrator | 14:27:53.581 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed] 2025-01-16 14:27:53.602568 | orchestrator | 14:27:53.602 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed] 2025-01-16 14:27:53.689163 | orchestrator | 14:27:53.688 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed] 2025-01-16 14:27:53.911935 | orchestrator | 14:27:53.910 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 10s [id=7deb246b-94c7-4ccf-88e4-d5863b7b5cdf] 2025-01-16 14:27:53.955655 | orchestrator | 14:27:53.911 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 10s [id=6fd23866-075a-4a28-b944-e328afaaaf4b] 2025-01-16 14:27:53.955748 | orchestrator | 14:27:53.955 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=899cd4fc-0b48-4c8f-9f9b-d306ab958cdd] 2025-01-16 14:27:53.957950 | orchestrator | 14:27:53.957 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 10s [id=1282efbc-d021-4c81-8029-0b4a449576ff] 2025-01-16 14:27:54.033352 | orchestrator | 14:27:54.032 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 10s [id=5f86695f-209f-4777-b6ae-0a791cf41e6e] 2025-01-16 14:27:56.919994 | orchestrator | 14:27:56.919 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 7s [id=d7370862-4902-4dd0-882f-531d535fda60] 2025-01-16 14:27:56.924923 | orchestrator | 14:27:56.924 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-01-16 14:27:56.926495 | orchestrator | 14:27:56.924 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-01-16 14:27:56.926573 | orchestrator | 14:27:56.926 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-01-16 14:27:57.074411 | orchestrator | 14:27:57.074 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=d75c79ab-56ed-4856-a786-7df8de1ec205] 2025-01-16 14:27:57.091843 | orchestrator | 14:27:57.091 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-01-16 14:27:57.099171 | orchestrator | 14:27:57.098 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 
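
Annotation: the security groups and their rules are now being created. Per the plan above, the management group opens ssh (22/tcp) and wireguard (51820/udp) to 0.0.0.0/0 plus icmp, and a separate rule admits VRRP (IP protocol 112). A sketch of the group plus two representative rules, assuming the usual terraform-provider-openstack arguments; which group receives the VRRP rule is not visible in this excerpt.

  # Illustrative sketch - rule values taken from the plan output above.
  resource "openstack_networking_secgroup_v2" "security_group_management" {
    name        = "testbed-management"
    description = "management security group"
  }

  resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
    description       = "ssh"
    direction         = "ingress"
    ethertype         = "IPv4"
    protocol          = "tcp"
    port_range_min    = 22
    port_range_max    = 22
    remote_ip_prefix  = "0.0.0.0/0"
    security_group_id = openstack_networking_secgroup_v2.security_group_management.id
  }

  resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
    description       = "vrrp"
    direction         = "ingress"
    ethertype         = "IPv4"
    protocol          = "112"  # VRRP is IP protocol 112
    remote_ip_prefix  = "0.0.0.0/0"
    # Assumption: target group not shown in this excerpt.
    security_group_id = openstack_networking_secgroup_v2.security_group_node.id
  }
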
2025-01-16 14:27:57.099295 | orchestrator | 14:27:57.099 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-01-16 14:27:57.099319 | orchestrator | 14:27:57.099 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-01-16 14:27:57.102901 | orchestrator | 14:27:57.099 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-01-16 14:27:57.102972 | orchestrator | 14:27:57.099 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-01-16 14:27:57.102989 | orchestrator | 14:27:57.102 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-01-16 14:27:57.113738 | orchestrator | 14:27:57.102 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-01-16 14:27:57.113828 | orchestrator | 14:27:57.113 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=fc600bec-0ef4-49dc-b140-1fdaf991e234] 2025-01-16 14:27:57.122262 | orchestrator | 14:27:57.122 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-01-16 14:27:57.428861 | orchestrator | 14:27:57.428 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=5997b3da-90b5-479b-8336-d8a0e6a0324a] 2025-01-16 14:27:57.439673 | orchestrator | 14:27:57.439 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-01-16 14:27:57.654977 | orchestrator | 14:27:57.654 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=33779e29-5a08-4365-920e-ac65d30b56d7] 2025-01-16 14:27:57.665699 | orchestrator | 14:27:57.665 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-01-16 14:27:57.814588 | orchestrator | 14:27:57.814 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=41cdca4c-b8ae-4c31-847a-ceb574b3d7d4] 2025-01-16 14:27:57.822978 | orchestrator | 14:27:57.822 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-01-16 14:27:57.905310 | orchestrator | 14:27:57.904 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=ebaefd74-8c05-4c9c-bbdc-433e0a12f35d] 2025-01-16 14:27:57.912635 | orchestrator | 14:27:57.912 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-01-16 14:27:57.989584 | orchestrator | 14:27:57.989 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=8121cade-bf95-4e62-8189-1adab2c28373] 2025-01-16 14:27:57.995874 | orchestrator | 14:27:57.995 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-01-16 14:27:58.190942 | orchestrator | 14:27:58.190 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=b983384a-58d5-41df-9388-be4658334a0b] 2025-01-16 14:27:58.196382 | orchestrator | 14:27:58.196 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 
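
Annotation: the router created a few lines above connects the management subnet to the external network. A sketch of the two resources as the plan describes them (name "testbed", external network e6be7364-bfd8-4de7-8120-8f41c69a139a, availability zone hint "nova"); in practice the external network ID would normally come from a variable or data source rather than a literal.

  # Illustrative sketch - attribute values taken from the plan output above.
  resource "openstack_networking_router_v2" "router" {
    name                    = "testbed"
    external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
    availability_zone_hints = ["nova"]
  }

  resource "openstack_networking_router_interface_v2" "router_interface" {
    router_id = openstack_networking_router_v2.router.id
    subnet_id = openstack_networking_subnet_v2.subnet_management.id
  }
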
2025-01-16 14:27:58.266262 | orchestrator | 14:27:58.265 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=f832a825-2a1e-4631-9fdf-a7bb55e381e1] 2025-01-16 14:27:58.279471 | orchestrator | 14:27:58.279 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-01-16 14:27:59.246352 | orchestrator | 14:27:59.246 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=76f28185-8269-4214-bd16-938bdf419e06] 2025-01-16 14:27:59.357679 | orchestrator | 14:27:59.357 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=e358d1c8-28d5-4325-9d52-97eca4edd2dd] 2025-01-16 14:28:02.673441 | orchestrator | 14:28:02.673 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=0b8f1ad3-fac4-4a00-ac2f-a962936826c4] 2025-01-16 14:28:02.931378 | orchestrator | 14:28:02.931 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=d5584d47-b980-4cb8-aa0f-38fc0d92811f] 2025-01-16 14:28:03.002633 | orchestrator | 14:28:03.002 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 6s [id=9db6aea2-d862-4b7a-a967-afab6661c49e] 2025-01-16 14:28:03.295146 | orchestrator | 14:28:03.294 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=314d5586-9187-4875-bfcf-1a19ff4484a1] 2025-01-16 14:28:03.388843 | orchestrator | 14:28:03.388 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=07dfd23b-0cae-4ff4-bc2b-97abc3ab830e] 2025-01-16 14:28:03.784033 | orchestrator | 14:28:03.783 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 6s [id=1c34293f-29c5-4250-ac78-2f28679bd3a7] 2025-01-16 14:28:03.851542 | orchestrator | 14:28:03.851 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 7s [id=38a9efc3-f26e-462a-9222-0aec05e6ee20] 2025-01-16 14:28:03.879871 | orchestrator | 14:28:03.879 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 7s [id=a3d5a0d1-87c8-4c39-814c-8ff1f2d02972] 2025-01-16 14:28:03.900428 | orchestrator | 14:28:03.900 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-01-16 14:28:03.911870 | orchestrator | 14:28:03.909 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-01-16 14:28:03.926886 | orchestrator | 14:28:03.909 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-01-16 14:28:03.927017 | orchestrator | 14:28:03.926 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-01-16 14:28:03.927837 | orchestrator | 14:28:03.927 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-01-16 14:28:03.928947 | orchestrator | 14:28:03.928 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-01-16 14:28:03.938209 | orchestrator | 14:28:03.938 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 
2025-01-16 14:28:11.180140 | orchestrator | 14:28:11.179 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 7s [id=fb7c7618-268c-41c9-8f96-c592776bc75f] 2025-01-16 14:28:11.193091 | orchestrator | 14:28:11.191 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-01-16 14:28:11.199145 | orchestrator | 14:28:11.198 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-01-16 14:28:11.199407 | orchestrator | 14:28:11.199 STDOUT terraform: local_file.inventory: Creating... 2025-01-16 14:28:11.207289 | orchestrator | 14:28:11.207 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=1086dd72664e4f05e22b2ce9e1334b378b8a32ac] 2025-01-16 14:28:11.208669 | orchestrator | 14:28:11.208 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=2619c78197d4042d83a99d0673380523c21fe188] 2025-01-16 14:28:11.697071 | orchestrator | 14:28:11.696 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=fb7c7618-268c-41c9-8f96-c592776bc75f] 2025-01-16 14:28:13.912803 | orchestrator | 14:28:13.912 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-01-16 14:28:13.912944 | orchestrator | 14:28:13.912 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-01-16 14:28:13.928585 | orchestrator | 14:28:13.928 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-01-16 14:28:13.928705 | orchestrator | 14:28:13.928 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-01-16 14:28:13.929790 | orchestrator | 14:28:13.929 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-01-16 14:28:13.939637 | orchestrator | 14:28:13.939 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-01-16 14:28:23.913650 | orchestrator | 14:28:23.913 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-01-16 14:28:23.914004 | orchestrator | 14:28:23.913 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-01-16 14:28:23.929675 | orchestrator | 14:28:23.929 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-01-16 14:28:23.929990 | orchestrator | 14:28:23.929 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-01-16 14:28:23.930238 | orchestrator | 14:28:23.930 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-01-16 14:28:23.940206 | orchestrator | 14:28:23.939 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-01-16 14:28:24.353619 | orchestrator | 14:28:24.353 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 20s [id=550a4167-9d92-420b-b8be-8f28df19bb81] 2025-01-16 14:28:33.917285 | orchestrator | 14:28:33.917 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2025-01-16 14:28:33.917435 | orchestrator | 14:28:33.917 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... 
[30s elapsed] 2025-01-16 14:28:33.930689 | orchestrator | 14:28:33.930 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2025-01-16 14:28:33.930772 | orchestrator | 14:28:33.930 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2025-01-16 14:28:33.941060 | orchestrator | 14:28:33.940 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2025-01-16 14:28:34.405197 | orchestrator | 14:28:34.404 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 30s [id=a8f888c5-0507-4294-acae-6aaf6cf6530b] 2025-01-16 14:28:34.480381 | orchestrator | 14:28:34.480 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 30s [id=392012d4-3ce1-4d5d-900f-414c9724e168] 2025-01-16 14:28:34.481923 | orchestrator | 14:28:34.481 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 30s [id=a819c596-3848-409e-8e1d-366159769730] 2025-01-16 14:28:34.522501 | orchestrator | 14:28:34.521 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=b04cbd54-d815-49f0-bf19-f698f69e4293] 2025-01-16 14:28:34.600847 | orchestrator | 14:28:34.600 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=f09ba4c6-572e-4a11-bd8d-20a1eec122aa] 2025-01-16 14:28:34.628114 | orchestrator | 14:28:34.627 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-01-16 14:28:34.628854 | orchestrator | 14:28:34.628 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-01-16 14:28:34.629702 | orchestrator | 14:28:34.629 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-01-16 14:28:34.634424 | orchestrator | 14:28:34.634 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[13]: Creating... 2025-01-16 14:28:34.639318 | orchestrator | 14:28:34.639 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[12]: Creating... 2025-01-16 14:28:34.645340 | orchestrator | 14:28:34.645 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=8893049214941416945] 2025-01-16 14:28:34.646442 | orchestrator | 14:28:34.646 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-01-16 14:28:34.657677 | orchestrator | 14:28:34.657 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[17]: Creating... 2025-01-16 14:28:34.658704 | orchestrator | 14:28:34.658 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[16]: Creating... 2025-01-16 14:28:34.666201 | orchestrator | 14:28:34.666 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-01-16 14:28:34.668685 | orchestrator | 14:28:34.668 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[11]: Creating... 2025-01-16 14:28:34.671607 | orchestrator | 14:28:34.671 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[10]: Creating... 
2025-01-16 14:28:39.945132 | orchestrator | 14:28:39.944 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[12]: Creation complete after 5s [id=550a4167-9d92-420b-b8be-8f28df19bb81/08ea27e5-46c9-491a-b107-4789383846f8] 2025-01-16 14:28:39.960945 | orchestrator | 14:28:39.960 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[17]: Creation complete after 5s [id=a8f888c5-0507-4294-acae-6aaf6cf6530b/d740be6b-1b5d-4ad1-85aa-7275c0983c2d] 2025-01-16 14:28:39.963943 | orchestrator | 14:28:39.963 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[9]: Creating... 2025-01-16 14:28:39.970912 | orchestrator | 14:28:39.970 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-01-16 14:28:39.981633 | orchestrator | 14:28:39.981 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 5s [id=550a4167-9d92-420b-b8be-8f28df19bb81/a8ee2823-701b-4f46-84dc-c0a96e4e2751] 2025-01-16 14:28:39.986565 | orchestrator | 14:28:39.986 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[16]: Creation complete after 5s [id=a819c596-3848-409e-8e1d-366159769730/f7bd705e-b5e0-4446-bf55-1dfa4188ee04] 2025-01-16 14:28:39.993488 | orchestrator | 14:28:39.992 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[15]: Creating... 2025-01-16 14:28:40.003320 | orchestrator | 14:28:40.003 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-01-16 14:28:40.009695 | orchestrator | 14:28:40.009 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 5s [id=392012d4-3ce1-4d5d-900f-414c9724e168/144e2d31-b1a0-45c5-bee7-951938d47a21] 2025-01-16 14:28:40.015561 | orchestrator | 14:28:40.015 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 5s [id=a8f888c5-0507-4294-acae-6aaf6cf6530b/97685de2-31d7-40a6-8026-91294c9f6af1] 2025-01-16 14:28:40.020490 | orchestrator | 14:28:40.020 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[10]: Creation complete after 5s [id=a819c596-3848-409e-8e1d-366159769730/511497a6-ce11-47ca-8c02-acccaddecbc9] 2025-01-16 14:28:40.020790 | orchestrator | 14:28:40.020 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-01-16 14:28:40.026410 | orchestrator | 14:28:40.026 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[13]: Creation complete after 5s [id=f09ba4c6-572e-4a11-bd8d-20a1eec122aa/2e08297a-47bf-4aeb-9f10-e9a4d07161c8] 2025-01-16 14:28:40.027879 | orchestrator | 14:28:40.027 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 5s [id=f09ba4c6-572e-4a11-bd8d-20a1eec122aa/a1dc0d1a-c1df-431a-8f1d-6c726706706a] 2025-01-16 14:28:40.029765 | orchestrator | 14:28:40.029 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-01-16 14:28:40.034618 | orchestrator | 14:28:40.034 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[11]: Creation complete after 5s [id=a8f888c5-0507-4294-acae-6aaf6cf6530b/0aac5059-2a3a-4141-840f-fb09a7465e72] 2025-01-16 14:28:40.036195 | orchestrator | 14:28:40.035 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[14]: Creating... 
2025-01-16 14:28:40.045124 | orchestrator | 14:28:40.044 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-01-16 14:28:40.050938 | orchestrator | 14:28:40.050 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 2025-01-16 14:28:45.243484 | orchestrator | 14:28:45.242 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[9]: Creation complete after 5s [id=b04cbd54-d815-49f0-bf19-f698f69e4293/72b30f3d-ea4f-4fbe-a722-d77662b0ee19] 2025-01-16 14:28:45.272863 | orchestrator | 14:28:45.272 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 5s [id=f09ba4c6-572e-4a11-bd8d-20a1eec122aa/5ec46ceb-bae7-4572-9a93-049002478163] 2025-01-16 14:28:45.293658 | orchestrator | 14:28:45.293 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[15]: Creation complete after 5s [id=b04cbd54-d815-49f0-bf19-f698f69e4293/0646438b-3566-4bd7-ac9f-c7444a60ff3f] 2025-01-16 14:28:45.305196 | orchestrator | 14:28:45.304 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 5s [id=550a4167-9d92-420b-b8be-8f28df19bb81/32889f36-f55b-4b84-b5ce-98c4b6c26bc3] 2025-01-16 14:28:45.336783 | orchestrator | 14:28:45.336 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 5s [id=a819c596-3848-409e-8e1d-366159769730/d1e8c7e9-38c3-4780-8ab7-178f632f9eb8] 2025-01-16 14:28:45.338493 | orchestrator | 14:28:45.338 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 5s [id=b04cbd54-d815-49f0-bf19-f698f69e4293/a3fa75ed-12ad-4d98-b1e3-06058efbf95a] 2025-01-16 14:28:45.375327 | orchestrator | 14:28:45.374 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[14]: Creation complete after 5s [id=392012d4-3ce1-4d5d-900f-414c9724e168/18d7b00d-e0db-4cba-a669-4d06ca6689ec] 2025-01-16 14:28:45.388326 | orchestrator | 14:28:45.387 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 5s [id=392012d4-3ce1-4d5d-900f-414c9724e168/20cc569f-32be-418c-b198-01024ddefd54] 2025-01-16 14:28:50.053102 | orchestrator | 14:28:50.052 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-01-16 14:28:54.533528 | orchestrator | 14:28:54.533 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 15s [id=20496a0d-05d8-4b24-a7a7-01088058180d] 2025-01-16 14:28:54.555088 | orchestrator | 14:28:54.554 STDOUT terraform: Apply complete! Resources: 82 added, 0 changed, 0 destroyed. 
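At this point Terraform has finished provisioning the testbed infrastructure: the apply summary above covers the six node servers, the manager server, their volumes and attachments, and the management networking. Purely as an illustrative cross-check (nothing the job itself runs), the same resources could be listed with the OpenStack CLI from any shell that has credentials for the project loaded:
  openstack server list    # should show the manager and the six testbed nodes created above
  openstack server show 550a4167-9d92-420b-b8be-8f28df19bb81    # node_server[0], ID taken from the log above
  openstack volume list    # the node volumes attached in the preceding steps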
2025-01-16 14:28:54.564137 | orchestrator | 14:28:54.554 STDOUT terraform: Outputs: 2025-01-16 14:28:54.564233 | orchestrator | 14:28:54.554 STDOUT terraform: manager_address = 2025-01-16 14:28:54.564244 | orchestrator | 14:28:54.554 STDOUT terraform: private_key = 2025-01-16 14:28:55.004172 | orchestrator | changed 2025-01-16 14:28:55.032205 | 2025-01-16 14:28:55.032335 | TASK [Create infrastructure (stable)] 2025-01-16 14:28:55.131933 | orchestrator | skipping: Conditional result was False 2025-01-16 14:28:55.146365 | 2025-01-16 14:28:55.146487 | TASK [Fetch manager address] 2025-01-16 14:29:05.593461 | orchestrator | ok 2025-01-16 14:29:05.612016 | 2025-01-16 14:29:05.612169 | TASK [Set manager_host address] 2025-01-16 14:29:05.728305 | orchestrator | ok 2025-01-16 14:29:05.738280 | 2025-01-16 14:29:05.738395 | LOOP [Update ansible collections] 2025-01-16 14:29:09.240728 | orchestrator | changed 2025-01-16 14:29:10.374135 | orchestrator | changed 2025-01-16 14:29:10.404745 | 2025-01-16 14:29:10.404949 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-01-16 14:29:20.943808 | orchestrator | ok 2025-01-16 14:29:20.956070 | 2025-01-16 14:29:20.956178 | TASK [Wait a little longer for the manager so that everything is ready] 2025-01-16 14:30:21.005058 | orchestrator | ok 2025-01-16 14:30:21.015234 | 2025-01-16 14:30:21.015342 | TASK [Fetch manager ssh hostkey] 2025-01-16 14:30:22.062327 | orchestrator | Output suppressed because no_log was given 2025-01-16 14:30:22.081262 | 2025-01-16 14:30:22.081412 | TASK [Get ssh keypair from terraform environment] 2025-01-16 14:30:22.625528 | orchestrator | changed 2025-01-16 14:30:22.643322 | 2025-01-16 14:30:22.643461 | TASK [Point out that the following task takes some time and does not give any output] 2025-01-16 14:30:22.694670 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
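The 'Fetch manager address' and 'Get ssh keypair from terraform environment' tasks above read back what the Terraform run exported (the manager_address output and the generated key pair), and the two wait tasks then poll port 22 until an OpenSSH banner appears. A rough manual equivalent, shown only as a sketch and assuming it is run from the same Terraform working directory the job used:
  MANAGER_IP=$(terraform output -raw manager_address)    # floating IP assigned to the manager
  timeout 300 bash -c "until nc -w 2 $MANAGER_IP 22 </dev/null | grep -q OpenSSH; do sleep 5; done"    # wait up to 300 s for the SSH banner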
2025-01-16 14:30:22.705202 | 2025-01-16 14:30:22.705317 | TASK [Run manager part 0] 2025-01-16 14:30:23.671465 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-01-16 14:30:23.724254 | orchestrator | 2025-01-16 14:30:24.855992 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-01-16 14:30:24.856049 | orchestrator | 2025-01-16 14:30:24.856068 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-01-16 14:30:24.856082 | orchestrator | ok: [testbed-manager] 2025-01-16 14:30:26.233375 | orchestrator | 2025-01-16 14:30:26.233525 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-01-16 14:30:26.233544 | orchestrator | 2025-01-16 14:30:26.233552 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-01-16 14:30:26.233569 | orchestrator | ok: [testbed-manager] 2025-01-16 14:30:26.769223 | orchestrator | 2025-01-16 14:30:26.769281 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-01-16 14:30:26.769296 | orchestrator | ok: [testbed-manager] 2025-01-16 14:30:26.817049 | orchestrator | 2025-01-16 14:30:26.817108 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-01-16 14:30:26.817124 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:30:26.857939 | orchestrator | 2025-01-16 14:30:26.858040 | orchestrator | TASK [Update package cache] **************************************************** 2025-01-16 14:30:26.858065 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:30:26.895762 | orchestrator | 2025-01-16 14:30:26.895825 | orchestrator | TASK [Install required packages] *********************************************** 2025-01-16 14:30:26.895844 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:30:26.926666 | orchestrator | 2025-01-16 14:30:26.926748 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-01-16 14:30:26.926774 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:30:26.961419 | orchestrator | 2025-01-16 14:30:26.961495 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-01-16 14:30:26.961513 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:30:26.991997 | orchestrator | 2025-01-16 14:30:26.992091 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-01-16 14:30:26.992124 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:30:27.035633 | orchestrator | 2025-01-16 14:30:27.035699 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-01-16 14:30:27.035717 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:30:27.563771 | orchestrator | 2025-01-16 14:30:27.563849 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-01-16 14:30:27.563866 | orchestrator | changed: [testbed-manager] 2025-01-16 14:31:19.235873 | orchestrator | 2025-01-16 14:31:19.235971 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-01-16 14:31:19.236011 | orchestrator | changed: [testbed-manager] 2025-01-16 14:32:03.855396 | orchestrator | 2025-01-16 14:32:03.855526 | orchestrator | TASK [Install HWE kernel package on Ubuntu] 
************************************ 2025-01-16 14:32:03.855569 | orchestrator | changed: [testbed-manager] 2025-01-16 14:32:16.089035 | orchestrator | 2025-01-16 14:32:16.089169 | orchestrator | TASK [Install required packages] *********************************************** 2025-01-16 14:32:16.089231 | orchestrator | changed: [testbed-manager] 2025-01-16 14:32:21.012778 | orchestrator | 2025-01-16 14:32:21.012902 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-01-16 14:32:21.012920 | orchestrator | changed: [testbed-manager] 2025-01-16 14:32:21.051368 | orchestrator | 2025-01-16 14:32:21.051513 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-01-16 14:32:21.051551 | orchestrator | ok: [testbed-manager] 2025-01-16 14:32:21.670119 | orchestrator | 2025-01-16 14:32:21.670188 | orchestrator | TASK [Get current user] ******************************************************** 2025-01-16 14:32:21.670207 | orchestrator | ok: [testbed-manager] 2025-01-16 14:32:22.255609 | orchestrator | 2025-01-16 14:32:22.255751 | orchestrator | TASK [Create venv directory] *************************************************** 2025-01-16 14:32:22.255867 | orchestrator | changed: [testbed-manager] 2025-01-16 14:32:26.667407 | orchestrator | 2025-01-16 14:32:26.667498 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-01-16 14:32:26.667530 | orchestrator | changed: [testbed-manager] 2025-01-16 14:32:30.939315 | orchestrator | 2025-01-16 14:32:30.939422 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-01-16 14:32:30.939461 | orchestrator | changed: [testbed-manager] 2025-01-16 14:32:32.683984 | orchestrator | 2025-01-16 14:32:32.684075 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-01-16 14:32:32.684100 | orchestrator | changed: [testbed-manager] 2025-01-16 14:32:33.865467 | orchestrator | 2025-01-16 14:32:33.865610 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-01-16 14:32:33.865650 | orchestrator | changed: [testbed-manager] 2025-01-16 14:32:34.597205 | orchestrator | 2025-01-16 14:32:34.597301 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-01-16 14:32:34.597325 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-01-16 14:32:34.669597 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-01-16 14:32:34.669694 | orchestrator | 2025-01-16 14:32:34.669714 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-01-16 14:32:34.669741 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-01-16 14:32:37.066223 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-01-16 14:32:37.066296 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-01-16 14:32:37.066309 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-01-16 14:32:37.066328 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-01-16 14:32:37.441467 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-01-16 14:32:37.441546 | orchestrator | 2025-01-16 14:32:37.441559 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-01-16 14:32:37.441580 | orchestrator | changed: [testbed-manager] 2025-01-16 14:32:54.539301 | orchestrator | 2025-01-16 14:32:54.539389 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-01-16 14:32:54.539409 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-01-16 14:32:56.118090 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-01-16 14:32:56.118168 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-01-16 14:32:56.118178 | orchestrator | 2025-01-16 14:32:56.118187 | orchestrator | TASK [Install local collections] *********************************************** 2025-01-16 14:32:56.118204 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-01-16 14:32:56.984479 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-01-16 14:32:56.984573 | orchestrator | 2025-01-16 14:32:56.984584 | orchestrator | PLAY [Create operator user] **************************************************** 2025-01-16 14:32:56.984592 | orchestrator | 2025-01-16 14:32:56.984598 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-01-16 14:32:56.984616 | orchestrator | ok: [testbed-manager] 2025-01-16 14:32:57.021073 | orchestrator | 2025-01-16 14:32:57.021163 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-01-16 14:32:57.021188 | orchestrator | ok: [testbed-manager] 2025-01-16 14:32:57.089719 | orchestrator | 2025-01-16 14:32:57.089779 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-01-16 14:32:57.089795 | orchestrator | ok: [testbed-manager] 2025-01-16 14:32:57.701586 | orchestrator | 2025-01-16 14:32:57.701687 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-01-16 14:32:57.701724 | orchestrator | changed: [testbed-manager] 2025-01-16 14:32:58.197185 | orchestrator | 2025-01-16 14:32:58.197255 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-01-16 14:32:58.197271 | orchestrator | changed: [testbed-manager] 2025-01-16 14:32:59.099573 | orchestrator | 2025-01-16 14:32:59.099821 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-01-16 14:32:59.099896 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-01-16 14:33:00.179012 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-01-16 14:33:00.179098 | orchestrator | 2025-01-16 14:33:00.179111 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-01-16 14:33:00.179134 | orchestrator | changed: [testbed-manager] 2025-01-16 14:33:01.425231 | orchestrator | 2025-01-16 14:33:01.425292 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-01-16 14:33:01.425307 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-01-16 
14:33:01.801926 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-01-16 14:33:01.802002 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-01-16 14:33:01.802012 | orchestrator | 2025-01-16 14:33:01.802057 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-01-16 14:33:01.802076 | orchestrator | changed: [testbed-manager] 2025-01-16 14:33:01.875318 | orchestrator | 2025-01-16 14:33:01.875563 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-01-16 14:33:01.875601 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:33:02.550216 | orchestrator | 2025-01-16 14:33:02.550279 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-01-16 14:33:02.550297 | orchestrator | changed: [testbed-manager] => (item=None) 2025-01-16 14:33:02.585472 | orchestrator | changed: [testbed-manager] 2025-01-16 14:33:02.585561 | orchestrator | 2025-01-16 14:33:02.585580 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-01-16 14:33:02.585606 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:33:02.618121 | orchestrator | 2025-01-16 14:33:02.618211 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-01-16 14:33:02.618231 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:33:02.645515 | orchestrator | 2025-01-16 14:33:02.645608 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-01-16 14:33:02.645631 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:33:02.695658 | orchestrator | 2025-01-16 14:33:02.695733 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-01-16 14:33:02.695759 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:33:03.191333 | orchestrator | 2025-01-16 14:33:03.191447 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-01-16 14:33:03.191489 | orchestrator | ok: [testbed-manager] 2025-01-16 14:33:04.080339 | orchestrator | 2025-01-16 14:33:04.080417 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-01-16 14:33:04.080431 | orchestrator | 2025-01-16 14:33:04.080441 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-01-16 14:33:04.080462 | orchestrator | ok: [testbed-manager] 2025-01-16 14:33:04.810839 | orchestrator | 2025-01-16 14:33:04.810926 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-01-16 14:33:04.810943 | orchestrator | changed: [testbed-manager] 2025-01-16 14:33:04.927900 | orchestrator | 2025-01-16 14:33:04.927991 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 14:33:04.928005 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-01-16 14:33:04.928015 | orchestrator | 2025-01-16 14:33:05.343157 | orchestrator | changed 2025-01-16 14:33:05.363294 | 2025-01-16 14:33:05.363428 | TASK [Point out that the log in on the manager is now possible] 2025-01-16 14:33:05.416779 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 
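'make login' is a convenience target of the osism/testbed repository; its definition is not part of this log. Based on the steps above (manager address and key pair taken from the Terraform environment, operator user created by osism.commons.operator), a hypothetical manual equivalent would be a plain SSH login; the user name and file locations below are assumptions, not values from this log:
  ssh -i <terraform-generated private key> -o StrictHostKeyChecking=accept-new dragon@<manager floating IP>    # 'dragon' is the assumed operator user; replace the placeholders with the real key path and address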
2025-01-16 14:33:05.428244 | 2025-01-16 14:33:05.428355 | TASK [Point out that the following task takes some time and does not give any output] 2025-01-16 14:33:05.479343 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-01-16 14:33:05.492264 | 2025-01-16 14:33:05.492391 | TASK [Run manager part 1 + 2] 2025-01-16 14:33:06.495473 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-01-16 14:33:06.559664 | orchestrator | 2025-01-16 14:33:08.236299 | orchestrator | PLAY [Run manager part 1] ******************************************************* 2025-01-16 14:33:08.236392 | orchestrator | 2025-01-16 14:33:08.236426 | orchestrator | TASK [Gathering Facts] ********************************************************** 2025-01-16 14:33:08.236453 | orchestrator | ok: [testbed-manager] 2025-01-16 14:33:08.262800 | orchestrator | 2025-01-16 14:33:08.262943 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************* 2025-01-16 14:33:08.262974 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:33:08.303230 | orchestrator | 2025-01-16 14:33:08.303319 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************* 2025-01-16 14:33:08.303347 | orchestrator | ok: [testbed-manager] 2025-01-16 14:33:08.340509 | orchestrator | 2025-01-16 14:33:08.340584 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-01-16 14:33:08.340602 | orchestrator | ok: [testbed-manager] 2025-01-16 14:33:08.419843 | orchestrator | 2025-01-16 14:33:08.419947 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-01-16 14:33:08.419968 | orchestrator | ok: [testbed-manager] 2025-01-16 14:33:08.484412 | orchestrator | 2025-01-16 14:33:08.484490 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-01-16 14:33:08.484511 | orchestrator | ok: [testbed-manager] 2025-01-16 14:33:08.538840 | orchestrator | 2025-01-16 14:33:08.538980 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-01-16 14:33:08.539017 | orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-01-16 14:33:09.118793 | orchestrator | 2025-01-16 14:33:09.118886 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-01-16 14:33:09.118906 | orchestrator | ok: [testbed-manager] 2025-01-16 14:33:09.155285 | orchestrator | 2025-01-16 14:33:09.155358 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-01-16 14:33:09.155376 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:33:10.123448 | orchestrator | 2025-01-16 14:33:10.123545 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-01-16 14:33:10.123584 | orchestrator | changed: [testbed-manager] 2025-01-16 14:33:10.533253 | orchestrator | 2025-01-16 14:33:10.533332 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-01-16 14:33:10.533363 | orchestrator | ok: [testbed-manager] 2025-01-16 14:33:11.272285 | orchestrator | 2025-01-16 14:33:11.272410 | orchestrator | TASK
[osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-01-16 14:33:11.272428 | orchestrator | changed: [testbed-manager] 2025-01-16 14:33:20.067775 | orchestrator | 2025-01-16 14:33:20.067989 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-01-16 14:33:20.068037 | orchestrator | changed: [testbed-manager] 2025-01-16 14:33:20.556679 | orchestrator | 2025-01-16 14:33:20.556793 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-01-16 14:33:20.556831 | orchestrator | ok: [testbed-manager] 2025-01-16 14:33:20.607204 | orchestrator | 2025-01-16 14:33:20.607287 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-01-16 14:33:20.607310 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:33:21.268086 | orchestrator | 2025-01-16 14:33:21.268177 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-01-16 14:33:21.268204 | orchestrator | changed: [testbed-manager] 2025-01-16 14:33:21.953694 | orchestrator | 2025-01-16 14:33:21.953849 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-01-16 14:33:21.953937 | orchestrator | changed: [testbed-manager] 2025-01-16 14:33:22.355323 | orchestrator | 2025-01-16 14:33:22.355415 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-01-16 14:33:22.355444 | orchestrator | changed: [testbed-manager] 2025-01-16 14:33:22.398545 | orchestrator | 2025-01-16 14:33:22.398776 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-01-16 14:33:22.398813 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-01-16 14:33:24.003679 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-01-16 14:33:24.003784 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-01-16 14:33:24.003804 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-01-16 14:33:24.003832 | orchestrator | changed: [testbed-manager] 2025-01-16 14:33:29.956925 | orchestrator | 2025-01-16 14:33:29.957006 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-01-16 14:33:29.957028 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-01-16 14:33:30.679569 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-01-16 14:33:30.679639 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-01-16 14:33:30.679648 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-01-16 14:33:30.679656 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-01-16 14:33:30.679663 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-01-16 14:33:30.679669 | orchestrator | 2025-01-16 14:33:30.679675 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-01-16 14:33:30.679701 | orchestrator | changed: [testbed-manager] 2025-01-16 14:33:30.716794 | orchestrator | 2025-01-16 14:33:30.716910 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-01-16 14:33:30.716945 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:33:31.851680 | orchestrator | 2025-01-16 14:33:31.851743 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-01-16 14:33:31.851760 | orchestrator | changed: [testbed-manager] 2025-01-16 14:33:31.888444 | orchestrator | 2025-01-16 14:33:31.888541 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-01-16 14:33:31.888571 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:34:30.472449 | orchestrator | 2025-01-16 14:34:30.472571 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-01-16 14:34:30.472587 | orchestrator | changed: [testbed-manager] 2025-01-16 14:34:31.186068 | orchestrator | 2025-01-16 14:34:31.186310 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-01-16 14:34:31.186345 | orchestrator | ok: [testbed-manager] 2025-01-16 14:34:31.317081 | orchestrator | 2025-01-16 14:34:31.317442 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 14:34:31.317518 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-01-16 14:34:31.317526 | orchestrator | 2025-01-16 14:34:31.585457 | orchestrator | changed 2025-01-16 14:34:31.604554 | 2025-01-16 14:34:31.604689 | TASK [Reboot manager] 2025-01-16 14:34:32.650364 | orchestrator | changed 2025-01-16 14:34:32.676748 | 2025-01-16 14:34:32.676950 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-01-16 14:34:43.055571 | orchestrator | ok 2025-01-16 14:34:43.066199 | 2025-01-16 14:34:43.066307 | TASK [Wait a little longer for the manager so that everything is ready] 2025-01-16 14:35:43.109562 | orchestrator | ok 2025-01-16 14:35:43.119811 | 2025-01-16 14:35:43.119960 | TASK [Deploy manager + bootstrap nodes] 2025-01-16 14:35:44.734255 | orchestrator | 2025-01-16 14:35:44.735239 | orchestrator | # DEPLOY MANAGER 2025-01-16 14:35:44.735311 | orchestrator | 2025-01-16 14:35:44.735336 | orchestrator | + set -e 2025-01-16 14:35:44.735387 | orchestrator | + echo 2025-01-16 14:35:44.735411 | orchestrator | + echo '# DEPLOY MANAGER' 2025-01-16 14:35:44.735430 | 
orchestrator | + echo 2025-01-16 14:35:44.735456 | orchestrator | + cat /opt/manager-vars.sh 2025-01-16 14:35:44.735496 | orchestrator | export NUMBER_OF_NODES=6 2025-01-16 14:35:44.735610 | orchestrator | 2025-01-16 14:35:44.735635 | orchestrator | export CEPH_VERSION=quincy 2025-01-16 14:35:44.735653 | orchestrator | export CONFIGURATION_VERSION=main 2025-01-16 14:35:44.735670 | orchestrator | export MANAGER_VERSION=latest 2025-01-16 14:35:44.735687 | orchestrator | export OPENSTACK_VERSION=2024.1 2025-01-16 14:35:44.735704 | orchestrator | 2025-01-16 14:35:44.735722 | orchestrator | export ARA=false 2025-01-16 14:35:44.735739 | orchestrator | export TEMPEST=false 2025-01-16 14:35:44.735756 | orchestrator | export IS_ZUUL=true 2025-01-16 14:35:44.735773 | orchestrator | 2025-01-16 14:35:44.735791 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.54 2025-01-16 14:35:44.735810 | orchestrator | export EXTERNAL_API=false 2025-01-16 14:35:44.735826 | orchestrator | 2025-01-16 14:35:44.735844 | orchestrator | export IMAGE_USER=ubuntu 2025-01-16 14:35:44.735861 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-01-16 14:35:44.735879 | orchestrator | 2025-01-16 14:35:44.735896 | orchestrator | export CEPH_STACK=ceph-ansible 2025-01-16 14:35:44.735913 | orchestrator | 2025-01-16 14:35:44.735930 | orchestrator | + echo 2025-01-16 14:35:44.735947 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-01-16 14:35:44.735970 | orchestrator | ++ export INTERACTIVE=false 2025-01-16 14:35:44.762877 | orchestrator | ++ INTERACTIVE=false 2025-01-16 14:35:44.763001 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-01-16 14:35:44.763034 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-01-16 14:35:44.763080 | orchestrator | + source /opt/manager-vars.sh 2025-01-16 14:35:44.763101 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-01-16 14:35:44.763120 | orchestrator | ++ NUMBER_OF_NODES=6 2025-01-16 14:35:44.763141 | orchestrator | ++ export CEPH_VERSION=quincy 2025-01-16 14:35:44.763152 | orchestrator | ++ CEPH_VERSION=quincy 2025-01-16 14:35:44.763164 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-01-16 14:35:44.763177 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-01-16 14:35:44.763202 | orchestrator | ++ export MANAGER_VERSION=latest 2025-01-16 14:35:44.763221 | orchestrator | ++ MANAGER_VERSION=latest 2025-01-16 14:35:44.763239 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-01-16 14:35:44.763257 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-01-16 14:35:44.763276 | orchestrator | ++ export ARA=false 2025-01-16 14:35:44.763296 | orchestrator | ++ ARA=false 2025-01-16 14:35:44.763316 | orchestrator | ++ export TEMPEST=false 2025-01-16 14:35:44.763334 | orchestrator | ++ TEMPEST=false 2025-01-16 14:35:44.763353 | orchestrator | ++ export IS_ZUUL=true 2025-01-16 14:35:44.763372 | orchestrator | ++ IS_ZUUL=true 2025-01-16 14:35:44.763390 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.54 2025-01-16 14:35:44.763411 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.54 2025-01-16 14:35:44.763438 | orchestrator | ++ export EXTERNAL_API=false 2025-01-16 14:35:44.763459 | orchestrator | ++ EXTERNAL_API=false 2025-01-16 14:35:44.763478 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-01-16 14:35:44.763498 | orchestrator | ++ IMAGE_USER=ubuntu 2025-01-16 14:35:44.763517 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-01-16 14:35:44.763537 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-01-16 14:35:44.763552 | orchestrator | ++ export 
CEPH_STACK=ceph-ansible 2025-01-16 14:35:44.763567 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-01-16 14:35:44.763586 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-01-16 14:35:44.763636 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager-service.sh 2025-01-16 14:35:44.766408 | orchestrator | + set -e 2025-01-16 14:35:44.768428 | orchestrator | + source /opt/manager-vars.sh 2025-01-16 14:35:44.768500 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-01-16 14:35:44.768522 | orchestrator | ++ NUMBER_OF_NODES=6 2025-01-16 14:35:44.768542 | orchestrator | ++ export CEPH_VERSION=quincy 2025-01-16 14:35:44.768562 | orchestrator | ++ CEPH_VERSION=quincy 2025-01-16 14:35:44.768580 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-01-16 14:35:44.768593 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-01-16 14:35:44.768604 | orchestrator | ++ export MANAGER_VERSION=latest 2025-01-16 14:35:44.768615 | orchestrator | ++ MANAGER_VERSION=latest 2025-01-16 14:35:44.768627 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-01-16 14:35:44.768638 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-01-16 14:35:44.768649 | orchestrator | ++ export ARA=false 2025-01-16 14:35:44.768660 | orchestrator | ++ ARA=false 2025-01-16 14:35:44.768672 | orchestrator | ++ export TEMPEST=false 2025-01-16 14:35:44.768711 | orchestrator | ++ TEMPEST=false 2025-01-16 14:35:44.768723 | orchestrator | ++ export IS_ZUUL=true 2025-01-16 14:35:44.768734 | orchestrator | ++ IS_ZUUL=true 2025-01-16 14:35:44.768745 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.54 2025-01-16 14:35:44.768756 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.54 2025-01-16 14:35:44.768768 | orchestrator | ++ export EXTERNAL_API=false 2025-01-16 14:35:44.768779 | orchestrator | ++ EXTERNAL_API=false 2025-01-16 14:35:44.768790 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-01-16 14:35:44.768801 | orchestrator | ++ IMAGE_USER=ubuntu 2025-01-16 14:35:44.768812 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-01-16 14:35:44.768823 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-01-16 14:35:44.768834 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-01-16 14:35:44.768845 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-01-16 14:35:44.768856 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-01-16 14:35:44.768868 | orchestrator | ++ export INTERACTIVE=false 2025-01-16 14:35:44.768880 | orchestrator | ++ INTERACTIVE=false 2025-01-16 14:35:44.768891 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-01-16 14:35:44.768902 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-01-16 14:35:44.768912 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-01-16 14:35:44.768923 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-01-16 14:35:44.768934 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh quincy 2025-01-16 14:35:44.768958 | orchestrator | + set -e 2025-01-16 14:35:44.770984 | orchestrator | + VERSION=quincy 2025-01-16 14:35:44.771034 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2025-01-16 14:35:44.771096 | orchestrator | + [[ -n ceph_version: quincy ]] 2025-01-16 14:35:44.773429 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: quincy/g' /opt/configuration/environments/manager/configuration.yml 2025-01-16 14:35:44.773499 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.1 2025-01-16 14:35:44.775573 | orchestrator | + 
set -e 2025-01-16 14:35:44.775617 | orchestrator | + VERSION=2024.1 2025-01-16 14:35:44.775634 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2025-01-16 14:35:44.776763 | orchestrator | + [[ -n openstack_version: 2024.1 ]] 2025-01-16 14:35:44.778639 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.1/g' /opt/configuration/environments/manager/configuration.yml 2025-01-16 14:35:44.778696 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-01-16 14:35:44.779004 | orchestrator | ++ semver latest 7.0.0 2025-01-16 14:35:44.797946 | orchestrator | + [[ -1 -ge 0 ]] 2025-01-16 14:35:44.812302 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-01-16 14:35:44.812403 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-01-16 14:35:44.812414 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-01-16 14:35:44.812438 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-01-16 14:35:44.813345 | orchestrator | + source /opt/venv/bin/activate 2025-01-16 14:35:44.813966 | orchestrator | ++ deactivate nondestructive 2025-01-16 14:35:44.814129 | orchestrator | ++ '[' -n '' ']' 2025-01-16 14:35:44.814152 | orchestrator | ++ '[' -n '' ']' 2025-01-16 14:35:44.814158 | orchestrator | ++ hash -r 2025-01-16 14:35:44.814163 | orchestrator | ++ '[' -n '' ']' 2025-01-16 14:35:44.814168 | orchestrator | ++ unset VIRTUAL_ENV 2025-01-16 14:35:44.814173 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-01-16 14:35:44.814181 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-01-16 14:35:44.814187 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-01-16 14:35:44.814192 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-01-16 14:35:44.814197 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-01-16 14:35:44.814215 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-01-16 14:35:45.563489 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-01-16 14:35:45.563611 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-01-16 14:35:45.563626 | orchestrator | ++ export PATH 2025-01-16 14:35:45.563639 | orchestrator | ++ '[' -n '' ']' 2025-01-16 14:35:45.563651 | orchestrator | ++ '[' -z '' ']' 2025-01-16 14:35:45.563662 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-01-16 14:35:45.563676 | orchestrator | ++ PS1='(venv) ' 2025-01-16 14:35:45.563687 | orchestrator | ++ export PS1 2025-01-16 14:35:45.563697 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-01-16 14:35:45.563709 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-01-16 14:35:45.563720 | orchestrator | ++ hash -r 2025-01-16 14:35:45.563732 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-01-16 14:35:45.563791 | orchestrator | 2025-01-16 14:35:45.921812 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-01-16 14:35:45.921983 | orchestrator | 2025-01-16 14:35:45.922116 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-01-16 14:35:45.922157 | orchestrator | ok: [testbed-manager] 2025-01-16 14:35:46.566407 | orchestrator | 2025-01-16 14:35:46.566578 | orchestrator | TASK [Copy fact files] 
********************************************************* 2025-01-16 14:35:46.566645 | orchestrator | changed: [testbed-manager] 2025-01-16 14:35:48.073672 | orchestrator | 2025-01-16 14:35:48.073780 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-01-16 14:35:48.073792 | orchestrator | 2025-01-16 14:35:48.073801 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-01-16 14:35:48.073824 | orchestrator | ok: [testbed-manager] 2025-01-16 14:35:51.065265 | orchestrator | 2025-01-16 14:35:51.065407 | orchestrator | TASK [Pull images] ************************************************************* 2025-01-16 14:35:51.065452 | orchestrator | changed: [testbed-manager] => (item=quay.io/osism/ara-server:1.7.2) 2025-01-16 14:36:34.331054 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/mariadb:11.6.2) 2025-01-16 14:36:34.331171 | orchestrator | changed: [testbed-manager] => (item=quay.io/osism/ceph-ansible:quincy) 2025-01-16 14:36:34.331180 | orchestrator | changed: [testbed-manager] => (item=quay.io/osism/inventory-reconciler:latest) 2025-01-16 14:36:34.331186 | orchestrator | changed: [testbed-manager] => (item=quay.io/osism/kolla-ansible:2024.1) 2025-01-16 14:36:34.331192 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/redis:7.4.2-alpine) 2025-01-16 14:36:34.331198 | orchestrator | changed: [testbed-manager] => (item=quay.io/osism/netbox:v4.1.10) 2025-01-16 14:36:34.331203 | orchestrator | changed: [testbed-manager] => (item=quay.io/osism/osism-ansible:latest) 2025-01-16 14:36:34.331209 | orchestrator | changed: [testbed-manager] => (item=quay.io/osism/osism:latest) 2025-01-16 14:36:34.331214 | orchestrator | changed: [testbed-manager] => (item=quay.io/osism/osism-netbox:latest) 2025-01-16 14:36:34.331218 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/postgres:16.6-alpine) 2025-01-16 14:36:34.331223 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/traefik:v3.3.1) 2025-01-16 14:36:34.331228 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/hashicorp/vault:1.18.3) 2025-01-16 14:36:34.331233 | orchestrator | 2025-01-16 14:36:34.331239 | orchestrator | TASK [Check status] ************************************************************ 2025-01-16 14:36:34.331256 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-01-16 14:36:34.353205 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left). 2025-01-16 14:36:34.353303 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j402472745145.1527', 'results_file': '/home/dragon/.ansible_async/j402472745145.1527', 'changed': True, 'item': 'quay.io/osism/ara-server:1.7.2', 'ansible_loop_var': 'item'}) 2025-01-16 14:36:34.353328 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j32624938635.1552', 'results_file': '/home/dragon/.ansible_async/j32624938635.1552', 'changed': True, 'item': 'index.docker.io/library/mariadb:11.6.2', 'ansible_loop_var': 'item'}) 2025-01-16 14:36:34.353339 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 
2025-01-16 14:36:34.353348 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j132627099985.1577', 'results_file': '/home/dragon/.ansible_async/j132627099985.1577', 'changed': True, 'item': 'quay.io/osism/ceph-ansible:quincy', 'ansible_loop_var': 'item'}) 2025-01-16 14:36:34.353356 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j502689620394.1609', 'results_file': '/home/dragon/.ansible_async/j502689620394.1609', 'changed': True, 'item': 'quay.io/osism/inventory-reconciler:latest', 'ansible_loop_var': 'item'}) 2025-01-16 14:36:34.353384 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-01-16 14:36:34.353392 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j914347351204.1641', 'results_file': '/home/dragon/.ansible_async/j914347351204.1641', 'changed': True, 'item': 'quay.io/osism/kolla-ansible:2024.1', 'ansible_loop_var': 'item'}) 2025-01-16 14:36:34.353399 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j413420724893.1673', 'results_file': '/home/dragon/.ansible_async/j413420724893.1673', 'changed': True, 'item': 'index.docker.io/library/redis:7.4.2-alpine', 'ansible_loop_var': 'item'}) 2025-01-16 14:36:34.353410 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j816751225119.1705', 'results_file': '/home/dragon/.ansible_async/j816751225119.1705', 'changed': True, 'item': 'quay.io/osism/netbox:v4.1.10', 'ansible_loop_var': 'item'}) 2025-01-16 14:36:34.353418 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j20276424035.1737', 'results_file': '/home/dragon/.ansible_async/j20276424035.1737', 'changed': True, 'item': 'quay.io/osism/osism-ansible:latest', 'ansible_loop_var': 'item'}) 2025-01-16 14:36:34.353426 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j221092085285.1769', 'results_file': '/home/dragon/.ansible_async/j221092085285.1769', 'changed': True, 'item': 'quay.io/osism/osism:latest', 'ansible_loop_var': 'item'}) 2025-01-16 14:36:34.353433 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j832364960828.1801', 'results_file': '/home/dragon/.ansible_async/j832364960828.1801', 'changed': True, 'item': 'quay.io/osism/osism-netbox:latest', 'ansible_loop_var': 'item'}) 2025-01-16 14:36:34.353441 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j706842582746.1833', 'results_file': '/home/dragon/.ansible_async/j706842582746.1833', 'changed': True, 'item': 'index.docker.io/library/postgres:16.6-alpine', 'ansible_loop_var': 'item'}) 2025-01-16 14:36:34.353448 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j734224548042.1865', 'results_file': '/home/dragon/.ansible_async/j734224548042.1865', 'changed': True, 'item': 'index.docker.io/library/traefik:v3.3.1', 'ansible_loop_var': 'item'}) 2025-01-16 14:36:34.353456 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j462161717329.1897', 'results_file': 
'/home/dragon/.ansible_async/j462161717329.1897', 'changed': True, 'item': 'index.docker.io/hashicorp/vault:1.18.3', 'ansible_loop_var': 'item'}) 2025-01-16 14:36:34.353464 | orchestrator | 2025-01-16 14:36:34.353473 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-01-16 14:36:34.353492 | orchestrator | ok: [testbed-manager] 2025-01-16 14:36:34.659238 | orchestrator | 2025-01-16 14:36:34.659360 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-01-16 14:36:34.659396 | orchestrator | changed: [testbed-manager] 2025-01-16 14:36:34.860656 | orchestrator | 2025-01-16 14:36:34.860777 | orchestrator | TASK [Add netbox_postgres_volume_type parameter] ******************************* 2025-01-16 14:36:34.860843 | orchestrator | changed: [testbed-manager] 2025-01-16 14:36:35.051833 | orchestrator | 2025-01-16 14:36:35.051921 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-01-16 14:36:35.051940 | orchestrator | changed: [testbed-manager] 2025-01-16 14:36:35.067646 | orchestrator | 2025-01-16 14:36:35.067719 | orchestrator | TASK [Do not use Nexus for Ceph on CentOS] ************************************* 2025-01-16 14:36:35.067735 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:36:35.091596 | orchestrator | 2025-01-16 14:36:35.091681 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-01-16 14:36:35.091702 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:36:35.275575 | orchestrator | 2025-01-16 14:36:35.275722 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-01-16 14:36:35.275760 | orchestrator | ok: [testbed-manager] 2025-01-16 14:36:35.371011 | orchestrator | 2025-01-16 14:36:35.371154 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-01-16 14:36:35.371178 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:36:36.449193 | orchestrator | 2025-01-16 14:36:36.449283 | orchestrator | PLAY [Apply role traefik & netbox] ********************************************* 2025-01-16 14:36:36.449292 | orchestrator | 2025-01-16 14:36:36.449298 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-01-16 14:36:36.449328 | orchestrator | ok: [testbed-manager] 2025-01-16 14:36:36.589078 | orchestrator | 2025-01-16 14:36:36.589238 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-01-16 14:36:36.589270 | orchestrator | 2025-01-16 14:36:36.651548 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-01-16 14:36:36.651674 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-01-16 14:36:37.292331 | orchestrator | 2025-01-16 14:36:37.292463 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-01-16 14:36:37.292504 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-01-16 14:36:38.366188 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-01-16 14:36:38.366301 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-01-16 14:36:38.366321 | orchestrator | 2025-01-16 14:36:38.366332 | orchestrator | TASK 
[osism.services.traefik : Copy configuration files] *********************** 2025-01-16 14:36:38.366357 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-01-16 14:36:38.755222 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-01-16 14:36:38.755333 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-01-16 14:36:38.755351 | orchestrator | 2025-01-16 14:36:38.755362 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-01-16 14:36:38.755387 | orchestrator | changed: [testbed-manager] => (item=None) 2025-01-16 14:36:39.146250 | orchestrator | changed: [testbed-manager] 2025-01-16 14:36:39.146372 | orchestrator | 2025-01-16 14:36:39.146390 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-01-16 14:36:39.146420 | orchestrator | changed: [testbed-manager] => (item=None) 2025-01-16 14:36:39.186901 | orchestrator | changed: [testbed-manager] 2025-01-16 14:36:39.186989 | orchestrator | 2025-01-16 14:36:39.187000 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-01-16 14:36:39.187022 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:36:39.408840 | orchestrator | 2025-01-16 14:36:39.408964 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-01-16 14:36:39.409001 | orchestrator | ok: [testbed-manager] 2025-01-16 14:36:39.473937 | orchestrator | 2025-01-16 14:36:39.474170 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-01-16 14:36:39.474227 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-01-16 14:36:40.150388 | orchestrator | 2025-01-16 14:36:40.150489 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-01-16 14:36:40.150512 | orchestrator | changed: [testbed-manager] 2025-01-16 14:36:40.648521 | orchestrator | 2025-01-16 14:36:40.648666 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-01-16 14:36:40.649359 | orchestrator | changed: [testbed-manager] 2025-01-16 14:36:42.970551 | orchestrator | 2025-01-16 14:36:42.970687 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-01-16 14:36:42.970738 | orchestrator | changed: [testbed-manager] 2025-01-16 14:36:43.204179 | orchestrator | 2025-01-16 14:36:43.204291 | orchestrator | TASK [Apply netbox role] ******************************************************* 2025-01-16 14:36:43.204325 | orchestrator | 2025-01-16 14:36:43.275146 | orchestrator | TASK [osism.services.netbox : Include install tasks] *************************** 2025-01-16 14:36:43.275245 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/install-Debian-family.yml for testbed-manager 2025-01-16 14:36:44.766441 | orchestrator | 2025-01-16 14:36:44.766566 | orchestrator | TASK [osism.services.netbox : Install required packages] *********************** 2025-01-16 14:36:44.766606 | orchestrator | ok: [testbed-manager] 2025-01-16 14:36:44.859657 | orchestrator | 2025-01-16 14:36:44.859756 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-01-16 14:36:44.859781 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config.yml for testbed-manager 2025-01-16 14:36:45.556792 | orchestrator | 2025-01-16 14:36:45.556904 | orchestrator | TASK [osism.services.netbox : Create required directories] ********************* 2025-01-16 14:36:45.556933 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox) 2025-01-16 14:36:45.624083 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration) 2025-01-16 14:36:45.624228 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/secrets) 2025-01-16 14:36:45.624250 | orchestrator | 2025-01-16 14:36:45.624266 | orchestrator | TASK [osism.services.netbox : Include postgres config tasks] ******************* 2025-01-16 14:36:45.624300 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-postgres.yml for testbed-manager 2025-01-16 14:36:46.013268 | orchestrator | 2025-01-16 14:36:46.013427 | orchestrator | TASK [osism.services.netbox : Copy postgres environment files] ***************** 2025-01-16 14:36:46.013480 | orchestrator | changed: [testbed-manager] => (item=postgres) 2025-01-16 14:36:46.402260 | orchestrator | 2025-01-16 14:36:46.402358 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-01-16 14:36:46.402378 | orchestrator | changed: [testbed-manager] => (item=None) 2025-01-16 14:36:46.634008 | orchestrator | changed: [testbed-manager] 2025-01-16 14:36:46.634381 | orchestrator | 2025-01-16 14:36:46.634417 | orchestrator | TASK [osism.services.netbox : Create docker-entrypoint-initdb.d directory] ***** 2025-01-16 14:36:46.634468 | orchestrator | changed: [testbed-manager] 2025-01-16 14:36:46.836607 | orchestrator | 2025-01-16 14:36:46.836754 | orchestrator | TASK [osism.services.netbox : Check if init.sql file exists] ******************* 2025-01-16 14:36:46.836789 | orchestrator | ok: [testbed-manager] 2025-01-16 14:36:46.867632 | orchestrator | 2025-01-16 14:36:46.867754 | orchestrator | TASK [osism.services.netbox : Copy init.sql file] ****************************** 2025-01-16 14:36:46.867792 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:36:47.250614 | orchestrator | 2025-01-16 14:36:47.250749 | orchestrator | TASK [osism.services.netbox : Create init-netbox-database.sh script] *********** 2025-01-16 14:36:47.250786 | orchestrator | changed: [testbed-manager] 2025-01-16 14:36:47.319424 | orchestrator | 2025-01-16 14:36:47.319530 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-01-16 14:36:47.319557 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-netbox.yml for testbed-manager 2025-01-16 14:36:47.781201 | orchestrator | 2025-01-16 14:36:47.781324 | orchestrator | TASK [osism.services.netbox : Create directories required by netbox] *********** 2025-01-16 14:36:47.781361 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/initializers) 2025-01-16 14:36:48.216740 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/startup-scripts) 2025-01-16 14:36:48.216877 | orchestrator | 2025-01-16 14:36:48.216897 | orchestrator | TASK [osism.services.netbox : Copy netbox environment files] ******************* 2025-01-16 14:36:48.216929 | orchestrator | changed: [testbed-manager] => (item=netbox) 2025-01-16 14:36:48.628017 | orchestrator | 2025-01-16 
14:36:48.628137 | orchestrator | TASK [osism.services.netbox : Copy netbox configuration file] ****************** 2025-01-16 14:36:48.628165 | orchestrator | changed: [testbed-manager] 2025-01-16 14:36:48.659188 | orchestrator | 2025-01-16 14:36:48.659293 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (<= 1.26)] **** 2025-01-16 14:36:48.659323 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:36:49.036179 | orchestrator | 2025-01-16 14:36:49.036307 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (> 1.26)] ***** 2025-01-16 14:36:49.036344 | orchestrator | changed: [testbed-manager] 2025-01-16 14:36:50.106550 | orchestrator | 2025-01-16 14:36:50.106703 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-01-16 14:36:50.106805 | orchestrator | changed: [testbed-manager] => (item=None) 2025-01-16 14:36:53.689219 | orchestrator | changed: [testbed-manager] => (item=None) 2025-01-16 14:36:53.689343 | orchestrator | changed: [testbed-manager] => (item=None) 2025-01-16 14:36:53.689359 | orchestrator | changed: [testbed-manager] 2025-01-16 14:36:53.689374 | orchestrator | 2025-01-16 14:36:53.689387 | orchestrator | TASK [osism.services.netbox : Deploy initializers for netbox] ****************** 2025-01-16 14:36:53.689416 | orchestrator | changed: [testbed-manager] => (item=custom_fields) 2025-01-16 14:36:54.081307 | orchestrator | changed: [testbed-manager] => (item=device_roles) 2025-01-16 14:36:54.081457 | orchestrator | changed: [testbed-manager] => (item=device_types) 2025-01-16 14:36:54.081488 | orchestrator | changed: [testbed-manager] => (item=groups) 2025-01-16 14:36:54.081510 | orchestrator | changed: [testbed-manager] => (item=manufacturers) 2025-01-16 14:36:54.081533 | orchestrator | changed: [testbed-manager] => (item=object_permissions) 2025-01-16 14:36:54.081553 | orchestrator | changed: [testbed-manager] => (item=prefix_vlan_roles) 2025-01-16 14:36:54.081574 | orchestrator | changed: [testbed-manager] => (item=sites) 2025-01-16 14:36:54.081596 | orchestrator | changed: [testbed-manager] => (item=tags) 2025-01-16 14:36:54.081619 | orchestrator | changed: [testbed-manager] => (item=users) 2025-01-16 14:36:54.081641 | orchestrator | 2025-01-16 14:36:54.081663 | orchestrator | TASK [osism.services.netbox : Deploy startup scripts for netbox] *************** 2025-01-16 14:36:54.081707 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/files/startup-scripts/270_tags.py) 2025-01-16 14:36:54.189376 | orchestrator | 2025-01-16 14:36:54.189488 | orchestrator | TASK [osism.services.netbox : Include service tasks] *************************** 2025-01-16 14:36:54.189519 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/service.yml for testbed-manager 2025-01-16 14:36:54.603639 | orchestrator | 2025-01-16 14:36:54.603765 | orchestrator | TASK [osism.services.netbox : Copy netbox systemd unit file] ******************* 2025-01-16 14:36:54.603805 | orchestrator | changed: [testbed-manager] 2025-01-16 14:36:54.992639 | orchestrator | 2025-01-16 14:36:54.992740 | orchestrator | TASK [osism.services.netbox : Create traefik external network] ***************** 2025-01-16 14:36:54.992769 | orchestrator | ok: [testbed-manager] 2025-01-16 14:36:55.452628 | orchestrator | 2025-01-16 14:36:55.452737 | orchestrator | TASK [osism.services.netbox : 
Copy docker-compose.yml file] ******************** 2025-01-16 14:36:55.452766 | orchestrator | changed: [testbed-manager] 2025-01-16 14:36:57.354669 | orchestrator | 2025-01-16 14:36:57.354764 | orchestrator | TASK [osism.services.netbox : Pull container images] *************************** 2025-01-16 14:36:57.354786 | orchestrator | ok: [testbed-manager] 2025-01-16 14:36:57.946667 | orchestrator | 2025-01-16 14:36:57.946764 | orchestrator | TASK [osism.services.netbox : Stop and disable old service docker-compose@netbox] *** 2025-01-16 14:36:57.946794 | orchestrator | ok: [testbed-manager] 2025-01-16 14:37:19.298468 | orchestrator | 2025-01-16 14:37:19.298642 | orchestrator | TASK [osism.services.netbox : Manage netbox service] *************************** 2025-01-16 14:37:19.298700 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage netbox service (10 retries left). 2025-01-16 14:37:19.350320 | orchestrator | ok: [testbed-manager] 2025-01-16 14:37:19.350456 | orchestrator | 2025-01-16 14:37:19.350483 | orchestrator | TASK [osism.services.netbox : Register that netbox service was started] ******** 2025-01-16 14:37:19.350523 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:37:19.380360 | orchestrator | 2025-01-16 14:37:19.380498 | orchestrator | TASK [osism.services.netbox : Flush handlers] ********************************** 2025-01-16 14:37:19.380523 | orchestrator | 2025-01-16 14:37:19.380540 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-01-16 14:37:19.380575 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:37:19.439634 | orchestrator | 2025-01-16 14:37:19.439785 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-01-16 14:37:19.439838 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/restart-service.yml for testbed-manager 2025-01-16 14:37:19.962483 | orchestrator | 2025-01-16 14:37:19.962589 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres container] ****** 2025-01-16 14:37:19.962614 | orchestrator | ok: [testbed-manager] 2025-01-16 14:37:20.014406 | orchestrator | 2025-01-16 14:37:20.014489 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres container version fact] *** 2025-01-16 14:37:20.014507 | orchestrator | ok: [testbed-manager] 2025-01-16 14:37:20.050521 | orchestrator | 2025-01-16 14:37:20.050637 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres container] *** 2025-01-16 14:37:20.050684 | orchestrator | ok: [testbed-manager] => { 2025-01-16 14:37:20.484000 | orchestrator | "msg": "The major version of the running postgres container is 16" 2025-01-16 14:37:20.484189 | orchestrator | } 2025-01-16 14:37:20.484225 | orchestrator | 2025-01-16 14:37:20.484251 | orchestrator | RUNNING HANDLER [osism.services.netbox : Pull postgres image] ****************** 2025-01-16 14:37:20.484297 | orchestrator | ok: [testbed-manager] 2025-01-16 14:37:21.058190 | orchestrator | 2025-01-16 14:37:21.058279 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres image] ********** 2025-01-16 14:37:21.058298 | orchestrator | ok: [testbed-manager] 2025-01-16 14:37:21.105998 | orchestrator | 2025-01-16 14:37:21.106130 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres image version fact] ****** 2025-01-16 14:37:21.106176 | orchestrator | ok: [testbed-manager] 2025-01-16 
14:37:21.139322 | orchestrator | 2025-01-16 14:37:21.139432 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres image] *** 2025-01-16 14:37:21.139459 | orchestrator | ok: [testbed-manager] => { 2025-01-16 14:37:21.176832 | orchestrator | "msg": "The major version of the postgres image is 16" 2025-01-16 14:37:21.177018 | orchestrator | } 2025-01-16 14:37:21.177050 | orchestrator | 2025-01-16 14:37:21.177073 | orchestrator | RUNNING HANDLER [osism.services.netbox : Stop netbox service] ****************** 2025-01-16 14:37:21.177118 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:37:21.214223 | orchestrator | 2025-01-16 14:37:21.214363 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to stop] ****** 2025-01-16 14:37:21.214400 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:37:21.250381 | orchestrator | 2025-01-16 14:37:21.250521 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres volume] ********* 2025-01-16 14:37:21.250558 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:37:21.287051 | orchestrator | 2025-01-16 14:37:21.287254 | orchestrator | RUNNING HANDLER [osism.services.netbox : Upgrade postgres database] ************ 2025-01-16 14:37:21.287306 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:37:21.322770 | orchestrator | 2025-01-16 14:37:21.322897 | orchestrator | RUNNING HANDLER [osism.services.netbox : Remove netbox-pgautoupgrade container] *** 2025-01-16 14:37:21.322931 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:37:21.381144 | orchestrator | 2025-01-16 14:37:21.381341 | orchestrator | RUNNING HANDLER [osism.services.netbox : Start netbox service] ***************** 2025-01-16 14:37:21.381385 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:37:22.238339 | orchestrator | 2025-01-16 14:37:22.238518 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-01-16 14:37:22.238559 | orchestrator | changed: [testbed-manager] 2025-01-16 14:37:22.324993 | orchestrator | 2025-01-16 14:37:22.325123 | orchestrator | RUNNING HANDLER [osism.services.netbox : Register that netbox service was started] *** 2025-01-16 14:37:22.325196 | orchestrator | ok: [testbed-manager] 2025-01-16 14:38:22.362247 | orchestrator | 2025-01-16 14:38:22.362368 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to start] ***** 2025-01-16 14:38:22.362404 | orchestrator | Pausing for 60 seconds 2025-01-16 14:38:22.427122 | orchestrator | changed: [testbed-manager] 2025-01-16 14:38:22.427280 | orchestrator | 2025-01-16 14:38:22.427299 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for an healthy netbox service] *** 2025-01-16 14:38:22.427325 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/wait-for-healthy-service.yml for testbed-manager 2025-01-16 14:40:15.612240 | orchestrator | 2025-01-16 14:40:15.612388 | orchestrator | RUNNING HANDLER [osism.services.netbox : Check that all containers are in a good state] *** 2025-01-16 14:40:15.612412 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (60 retries left). 2025-01-16 14:40:16.802560 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (59 retries left). 
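The postgres handlers above only take the stop/upgrade/start path when the major version of the running netbox-postgres container differs from the major version of the configured image; here both report 16, so those handlers are skipped. A shell sketch of that comparison, assuming the version can be read from the image tag (e.g. postgres:16.6-alpine), might look like:

    # Sketch only; the role derives the versions from container/image inspection.
    running_tag=$(docker inspect -f '{{.Config.Image}}' netbox-postgres-1)
    running_major=${running_tag##*:}      # e.g. 16.6-alpine
    running_major=${running_major%%.*}    # e.g. 16
    target_major=16                       # major version of the configured image
    if [ "$running_major" != "$target_major" ]; then
        echo "postgres major version changed: $running_major -> $target_major, upgrade required"
    fi
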
2025-01-16 14:40:16.802680 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (58 retries left). 2025-01-16 14:40:16.802727 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (57 retries left). 2025-01-16 14:40:16.802741 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (56 retries left). 2025-01-16 14:40:16.802754 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (55 retries left). 2025-01-16 14:40:16.802767 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (54 retries left). 2025-01-16 14:40:16.802780 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (53 retries left). 2025-01-16 14:40:16.802793 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (52 retries left). 2025-01-16 14:40:16.802806 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (51 retries left). 2025-01-16 14:40:16.802819 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (50 retries left). 2025-01-16 14:40:16.802831 | orchestrator | changed: [testbed-manager] 2025-01-16 14:40:16.802844 | orchestrator | 2025-01-16 14:40:16.802859 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-01-16 14:40:16.802872 | orchestrator | 2025-01-16 14:40:16.802897 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-01-16 14:40:16.802926 | orchestrator | ok: [testbed-manager] 2025-01-16 14:40:16.870138 | orchestrator | 2025-01-16 14:40:16.870255 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-01-16 14:40:16.870332 | orchestrator | 2025-01-16 14:40:16.904528 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-01-16 14:40:16.904633 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-01-16 14:40:17.905616 | orchestrator | 2025-01-16 14:40:17.905701 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-01-16 14:40:17.905721 | orchestrator | ok: [testbed-manager] 2025-01-16 14:40:17.929441 | orchestrator | 2025-01-16 14:40:17.929527 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-01-16 14:40:17.929552 | orchestrator | ok: [testbed-manager] 2025-01-16 14:40:17.984212 | orchestrator | 2025-01-16 14:40:17.984362 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-01-16 14:40:17.984401 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-01-16 14:40:19.588697 | orchestrator | 2025-01-16 14:40:19.588818 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-01-16 14:40:19.588864 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-01-16 14:40:19.963838 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-01-16 14:40:19.963927 | orchestrator | changed: 
[testbed-manager] => (item=/opt/manager/configuration) 2025-01-16 14:40:19.963935 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-01-16 14:40:19.963940 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-01-16 14:40:19.963946 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-01-16 14:40:19.963951 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-01-16 14:40:19.963956 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-01-16 14:40:19.963962 | orchestrator | 2025-01-16 14:40:19.963967 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-01-16 14:40:19.963983 | orchestrator | changed: [testbed-manager] 2025-01-16 14:40:20.012463 | orchestrator | 2025-01-16 14:40:20.012612 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-01-16 14:40:20.012650 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-01-16 14:40:20.716840 | orchestrator | 2025-01-16 14:40:20.717963 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-01-16 14:40:20.718096 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-01-16 14:40:21.097425 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-01-16 14:40:21.097542 | orchestrator | 2025-01-16 14:40:21.097563 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-01-16 14:40:21.097598 | orchestrator | changed: [testbed-manager] 2025-01-16 14:40:21.128106 | orchestrator | 2025-01-16 14:40:21.128252 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-01-16 14:40:21.128402 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:40:21.167009 | orchestrator | 2025-01-16 14:40:21.167143 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-01-16 14:40:21.167191 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-01-16 14:40:21.940297 | orchestrator | 2025-01-16 14:40:21.940500 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-01-16 14:40:21.940537 | orchestrator | changed: [testbed-manager] => (item=None) 2025-01-16 14:40:22.301961 | orchestrator | changed: [testbed-manager] => (item=None) 2025-01-16 14:40:22.302108 | orchestrator | changed: [testbed-manager] 2025-01-16 14:40:22.302123 | orchestrator | 2025-01-16 14:40:22.302135 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-01-16 14:40:22.302161 | orchestrator | changed: [testbed-manager] 2025-01-16 14:40:22.348676 | orchestrator | 2025-01-16 14:40:22.348946 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-01-16 14:40:22.349042 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-netbox.yml for testbed-manager 2025-01-16 14:40:22.708086 | orchestrator | 2025-01-16 14:40:22.708174 | orchestrator | TASK [osism.services.manager : Copy secret files] ****************************** 2025-01-16 14:40:22.708195 | orchestrator | changed: [testbed-manager] => (item=None) 
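The manager role keeps its state under a small set of fixed directories, created by the "Create required directories" task above. Purely as an illustrative check (not something the role itself runs), the layout on the manager node could be confirmed with:

    # Directories taken from the task output above.
    for d in /opt/ansible /opt/ansible/secrets /opt/archive \
             /opt/manager /opt/manager/configuration /opt/manager/data \
             /opt/manager/secrets /opt/state; do
        [ -d "$d" ] || echo "missing: $d"
    done
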
2025-01-16 14:40:23.066645 | orchestrator | changed: [testbed-manager] 2025-01-16 14:40:23.066742 | orchestrator | 2025-01-16 14:40:23.066752 | orchestrator | TASK [osism.services.manager : Copy netbox environment file] ******************* 2025-01-16 14:40:23.066769 | orchestrator | changed: [testbed-manager] 2025-01-16 14:40:23.122854 | orchestrator | 2025-01-16 14:40:23.123022 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-01-16 14:40:23.123079 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-01-16 14:40:24.462468 | orchestrator | 2025-01-16 14:40:24.462562 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-01-16 14:40:24.462585 | orchestrator | changed: [testbed-manager] 2025-01-16 14:40:25.679447 | orchestrator | 2025-01-16 14:40:25.679591 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-01-16 14:40:25.679647 | orchestrator | changed: [testbed-manager] 2025-01-16 14:40:26.409782 | orchestrator | 2025-01-16 14:40:26.409926 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-01-16 14:40:26.409980 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-01-16 14:40:26.784266 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-01-16 14:40:26.784415 | orchestrator | 2025-01-16 14:40:26.784436 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-01-16 14:40:26.784471 | orchestrator | changed: [testbed-manager] 2025-01-16 14:40:26.987172 | orchestrator | 2025-01-16 14:40:26.987295 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-01-16 14:40:26.987383 | orchestrator | ok: [testbed-manager] 2025-01-16 14:40:27.028853 | orchestrator | 2025-01-16 14:40:27.028991 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-01-16 14:40:27.029042 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:40:27.389641 | orchestrator | 2025-01-16 14:40:27.389763 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-01-16 14:40:27.389801 | orchestrator | changed: [testbed-manager] 2025-01-16 14:40:27.429043 | orchestrator | 2025-01-16 14:40:27.429160 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-01-16 14:40:27.429195 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-01-16 14:40:27.451577 | orchestrator | 2025-01-16 14:40:27.451717 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-01-16 14:40:27.451768 | orchestrator | ok: [testbed-manager] 2025-01-16 14:40:28.614983 | orchestrator | 2025-01-16 14:40:28.615139 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-01-16 14:40:28.615173 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-01-16 14:40:29.028510 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-01-16 14:40:29.028618 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-01-16 14:40:29.028634 | orchestrator | 2025-01-16 
14:40:29.028646 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-01-16 14:40:29.028674 | orchestrator | changed: [testbed-manager] 2025-01-16 14:40:29.070803 | orchestrator | 2025-01-16 14:40:29.070914 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-01-16 14:40:29.070950 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-01-16 14:40:29.094692 | orchestrator | 2025-01-16 14:40:29.094781 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-01-16 14:40:29.094807 | orchestrator | ok: [testbed-manager] 2025-01-16 14:40:29.496836 | orchestrator | 2025-01-16 14:40:29.496950 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-01-16 14:40:29.496981 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-01-16 14:40:29.543686 | orchestrator | 2025-01-16 14:40:29.543801 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-01-16 14:40:29.543838 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-01-16 14:40:29.958688 | orchestrator | 2025-01-16 14:40:29.958797 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-01-16 14:40:29.958828 | orchestrator | changed: [testbed-manager] 2025-01-16 14:40:30.337379 | orchestrator | 2025-01-16 14:40:30.337475 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-01-16 14:40:30.337501 | orchestrator | ok: [testbed-manager] 2025-01-16 14:40:30.362008 | orchestrator | 2025-01-16 14:40:30.362191 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-01-16 14:40:30.362229 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:40:30.388384 | orchestrator | 2025-01-16 14:40:30.388486 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-01-16 14:40:30.388518 | orchestrator | ok: [testbed-manager] 2025-01-16 14:40:30.901439 | orchestrator | 2025-01-16 14:40:30.901565 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-01-16 14:40:30.901603 | orchestrator | changed: [testbed-manager] 2025-01-16 14:40:51.493223 | orchestrator | 2025-01-16 14:40:51.493413 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-01-16 14:40:51.493459 | orchestrator | changed: [testbed-manager] 2025-01-16 14:40:51.872367 | orchestrator | 2025-01-16 14:40:51.872486 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-01-16 14:40:51.872518 | orchestrator | ok: [testbed-manager] 2025-01-16 14:40:58.809897 | orchestrator | 2025-01-16 14:40:58.810079 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-01-16 14:40:58.810177 | orchestrator | changed: [testbed-manager] 2025-01-16 14:40:58.844637 | orchestrator | 2025-01-16 14:40:58.844733 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-01-16 14:40:58.844758 | orchestrator | ok: [testbed-manager] 2025-01-16 14:40:58.881117 | 
orchestrator | 2025-01-16 14:40:58.881197 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-01-16 14:40:58.881206 | orchestrator | 2025-01-16 14:40:58.881212 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-01-16 14:40:58.881228 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:41:58.913031 | orchestrator | 2025-01-16 14:41:58.913178 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-01-16 14:41:58.913212 | orchestrator | Pausing for 60 seconds 2025-01-16 14:42:00.232470 | orchestrator | changed: [testbed-manager] 2025-01-16 14:42:00.232661 | orchestrator | 2025-01-16 14:42:00.232700 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-01-16 14:42:00.232752 | orchestrator | changed: [testbed-manager] 2025-01-16 14:42:20.848630 | orchestrator | 2025-01-16 14:42:20.848751 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-01-16 14:42:20.848788 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-01-16 14:42:24.187190 | orchestrator | changed: [testbed-manager] 2025-01-16 14:42:24.187318 | orchestrator | 2025-01-16 14:42:24.187340 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-01-16 14:42:24.187372 | orchestrator | changed: [testbed-manager] 2025-01-16 14:42:24.239289 | orchestrator | 2025-01-16 14:42:24.239463 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-01-16 14:42:24.239516 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-01-16 14:42:24.271414 | orchestrator | 2025-01-16 14:42:24.271519 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-01-16 14:42:24.271530 | orchestrator | 2025-01-16 14:42:24.271537 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-01-16 14:42:24.271559 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:42:24.343584 | orchestrator | 2025-01-16 14:42:24.343689 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 14:42:24.343705 | orchestrator | testbed-manager : ok=103 changed=54 unreachable=0 failed=0 skipped=19 rescued=0 ignored=0 2025-01-16 14:42:24.343714 | orchestrator | 2025-01-16 14:42:24.343737 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-01-16 14:42:24.346783 | orchestrator | + deactivate 2025-01-16 14:42:24.346872 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-01-16 14:42:24.346885 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-01-16 14:42:24.346894 | orchestrator | + export PATH 2025-01-16 14:42:24.346902 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-01-16 14:42:24.346911 | orchestrator | + '[' -n '' ']' 2025-01-16 14:42:24.346919 | orchestrator | + hash -r 2025-01-16 14:42:24.346927 | orchestrator | + '[' -n '' ']' 2025-01-16 14:42:24.346935 | orchestrator | + unset VIRTUAL_ENV 2025-01-16 14:42:24.346943 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-01-16 
14:42:24.346952 | orchestrator | + '[' '!' '' = nondestructive ']' 2025-01-16 14:42:24.346961 | orchestrator | + unset -f deactivate 2025-01-16 14:42:24.346970 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-01-16 14:42:24.346994 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-01-16 14:42:24.361042 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-01-16 14:42:24.361144 | orchestrator | + local max_attempts=60 2025-01-16 14:42:24.361156 | orchestrator | + local name=ceph-ansible 2025-01-16 14:42:24.361174 | orchestrator | + local attempt_num=1 2025-01-16 14:42:24.361183 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-01-16 14:42:24.361205 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-01-16 14:42:24.374203 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-01-16 14:42:24.374320 | orchestrator | + local max_attempts=60 2025-01-16 14:42:24.374333 | orchestrator | + local name=kolla-ansible 2025-01-16 14:42:24.374339 | orchestrator | + local attempt_num=1 2025-01-16 14:42:24.374345 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-01-16 14:42:24.374365 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-01-16 14:42:24.390010 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-01-16 14:42:24.390221 | orchestrator | + local max_attempts=60 2025-01-16 14:42:24.390253 | orchestrator | + local name=osism-ansible 2025-01-16 14:42:24.390275 | orchestrator | + local attempt_num=1 2025-01-16 14:42:24.390297 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-01-16 14:42:24.390341 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-01-16 14:42:24.733815 | orchestrator | + [[ true == \t\r\u\e ]] 2025-01-16 14:42:24.733913 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-01-16 14:42:24.733956 | orchestrator | ++ semver latest 8.0.0 2025-01-16 14:42:24.751102 | orchestrator | + [[ -1 -ge 0 ]] 2025-01-16 14:42:24.765974 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-01-16 14:42:24.766114 | orchestrator | + wait_for_container_healthy 60 netbox-netbox-1 2025-01-16 14:42:24.766127 | orchestrator | + local max_attempts=60 2025-01-16 14:42:24.766137 | orchestrator | + local name=netbox-netbox-1 2025-01-16 14:42:24.766147 | orchestrator | + local attempt_num=1 2025-01-16 14:42:24.766157 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' netbox-netbox-1 2025-01-16 14:42:24.766180 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-01-16 14:42:24.768759 | orchestrator | + /opt/configuration/scripts/bootstrap/000-netbox.sh 2025-01-16 14:42:24.768830 | orchestrator | + set -e 2025-01-16 14:42:25.826217 | orchestrator | + osism netbox import 2025-01-16 14:42:25.826340 | orchestrator | 2025-01-16 14:42:25 | INFO  | Task 65b45d0a-12e7-4d81-a422-5770810672b7 is running. Wait. No more output. 2025-01-16 14:42:27.764183 | orchestrator | + osism netbox init 2025-01-16 14:42:28.741795 | orchestrator | 2025-01-16 14:42:28 | INFO  | Task e0ee7b90-c2c9-4e22-8066-8d5db0717d6e was prepared for execution. 2025-01-16 14:42:29.931053 | orchestrator | 2025-01-16 14:42:28 | INFO  | It takes a moment until task e0ee7b90-c2c9-4e22-8066-8d5db0717d6e has been started and output is visible here. 
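The wait_for_container_healthy calls traced above all succeed on the first attempt, so only the happy path is visible. Based on the variables and the docker inspect call in the trace, the helper presumably looks roughly like the following; the retry/sleep branch and the polling interval are assumptions, since they never execute here:

    wait_for_container_healthy() {
        local max_attempts="$1"
        local name="$2"
        local attempt_num=1
        # Poll the Docker healthcheck status until it reports "healthy".
        until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
            if [[ "$attempt_num" -ge "$max_attempts" ]]; then
                echo "container $name did not become healthy in time" >&2
                return 1
            fi
            attempt_num=$((attempt_num + 1))
            sleep 5   # interval not visible in the trace; assumed
        done
    }
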
2025-01-16 14:42:29.931218 | orchestrator | 2025-01-16 14:42:30.530822 | orchestrator | PLAY [Wait for netbox service] ************************************************* 2025-01-16 14:42:30.530958 | orchestrator | 2025-01-16 14:42:30.530970 | orchestrator | TASK [Wait for netbox service] ************************************************* 2025-01-16 14:42:30.530990 | orchestrator | [WARNING]: Platform linux on host localhost is using the discovered Python 2025-01-16 14:42:30.532344 | orchestrator | interpreter at /usr/local/bin/python3.13, but future installation of another 2025-01-16 14:42:30.532359 | orchestrator | Python interpreter could change the meaning of that path. See 2025-01-16 14:42:30.532368 | orchestrator | https://docs.ansible.com/ansible- 2025-01-16 14:42:30.532379 | orchestrator | core/2.18/reference_appendices/interpreter_discovery.html for more information. 2025-01-16 14:42:30.535307 | orchestrator | ok: [localhost] 2025-01-16 14:42:30.535508 | orchestrator | 2025-01-16 14:42:30.535923 | orchestrator | PLAY [Manage sites and locations] ********************************************** 2025-01-16 14:42:30.536096 | orchestrator | 2025-01-16 14:42:30.536412 | orchestrator | TASK [Manage Discworld site] *************************************************** 2025-01-16 14:42:31.430711 | orchestrator | changed: [localhost] 2025-01-16 14:42:32.412325 | orchestrator | 2025-01-16 14:42:32.412492 | orchestrator | TASK [Manage Ankh-Morpork location] ******************************************** 2025-01-16 14:42:32.412527 | orchestrator | changed: [localhost] 2025-01-16 14:42:33.346907 | orchestrator | 2025-01-16 14:42:33.347046 | orchestrator | PLAY [Manage IP prefixes] ****************************************************** 2025-01-16 14:42:33.347796 | orchestrator | 2025-01-16 14:42:33.347820 | orchestrator | TASK [Manage 192.168.16.0/20] ************************************************** 2025-01-16 14:42:33.347851 | orchestrator | changed: [localhost] 2025-01-16 14:42:34.196319 | orchestrator | 2025-01-16 14:42:34.196569 | orchestrator | TASK [Manage 192.168.112.0/20] ************************************************* 2025-01-16 14:42:34.196609 | orchestrator | changed: [localhost] 2025-01-16 14:42:34.196691 | orchestrator | 2025-01-16 14:42:34.196708 | orchestrator | PLAY [Manage IP addresses] ***************************************************** 2025-01-16 14:42:34.196721 | orchestrator | 2025-01-16 14:42:34.196750 | orchestrator | TASK [Manage api.testbed.osism.xyz IP address] ********************************* 2025-01-16 14:42:35.043589 | orchestrator | changed: [localhost] 2025-01-16 14:42:35.787717 | orchestrator | 2025-01-16 14:42:35.787827 | orchestrator | TASK [Manage api-int.testbed.osism.xyz IP address] ***************************** 2025-01-16 14:42:35.787859 | orchestrator | changed: [localhost] 2025-01-16 14:42:35.962694 | orchestrator | 2025-01-16 14:42:35.962823 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 14:42:35.962849 | orchestrator | localhost : ok=7 changed=6 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 14:42:35.962870 | orchestrator | 2025-01-16 14:42:35 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-01-16 14:42:35.962923 | orchestrator | 2025-01-16 14:42:35 | INFO  | Please wait and do not abort execution. 
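The init play above creates the Discworld site, the Ankh-Morpork location, the two prefixes and the api/api-int addresses directly in NetBox. Such objects can be spot-checked afterwards through the standard NetBox REST API, for example like this (URL and token below are placeholders, not values from this job):

    NETBOX_URL=https://netbox.example.com
    NETBOX_TOKEN=changeme
    # Query the site and one of the prefixes created by the init play.
    curl -s -H "Authorization: Token ${NETBOX_TOKEN}" \
        "${NETBOX_URL}/api/dcim/sites/?name=Discworld"
    curl -s -H "Authorization: Token ${NETBOX_TOKEN}" \
        "${NETBOX_URL}/api/ipam/prefixes/?prefix=192.168.16.0/20"
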
2025-01-16 14:42:35.962957 | orchestrator | 2025-01-16 14:42:35.962997 | orchestrator | + osism netbox manage 1000 2025-01-16 14:42:36.915689 | orchestrator | 2025-01-16 14:42:36 | INFO  | Task 3097b98b-8bf6-4d2a-b34f-44ebec16cd91 was prepared for execution. 2025-01-16 14:42:38.088495 | orchestrator | 2025-01-16 14:42:36 | INFO  | It takes a moment until task 3097b98b-8bf6-4d2a-b34f-44ebec16cd91 has been started and output is visible here. 2025-01-16 14:42:38.088644 | orchestrator | 2025-01-16 14:42:39.100799 | orchestrator | PLAY [Manage rack 1000] ******************************************************** 2025-01-16 14:42:39.100935 | orchestrator | 2025-01-16 14:42:39.100957 | orchestrator | TASK [Manage rack 1000] ******************************************************** 2025-01-16 14:42:39.100990 | orchestrator | changed: [localhost] 2025-01-16 14:42:42.194279 | orchestrator | 2025-01-16 14:42:42.194448 | orchestrator | TASK [Manage testbed-switch-0] ************************************************* 2025-01-16 14:42:42.194479 | orchestrator | changed: [localhost] 2025-01-16 14:42:45.251893 | orchestrator | 2025-01-16 14:42:45.251985 | orchestrator | TASK [Manage testbed-switch-1] ************************************************* 2025-01-16 14:42:45.252068 | orchestrator | changed: [localhost] 2025-01-16 14:42:45.252286 | orchestrator | 2025-01-16 14:42:45.252303 | orchestrator | TASK [Manage testbed-switch-2] ************************************************* 2025-01-16 14:42:52.225356 | orchestrator | changed: [localhost] 2025-01-16 14:42:53.634006 | orchestrator | 2025-01-16 14:42:53.634255 | orchestrator | TASK [Manage testbed-manager] ************************************************** 2025-01-16 14:42:53.634301 | orchestrator | changed: [localhost] 2025-01-16 14:42:55.384229 | orchestrator | 2025-01-16 14:42:55.384325 | orchestrator | TASK [Manage testbed-node-0] *************************************************** 2025-01-16 14:42:55.384343 | orchestrator | changed: [localhost] 2025-01-16 14:42:56.746277 | orchestrator | 2025-01-16 14:42:56.746424 | orchestrator | TASK [Manage testbed-node-1] *************************************************** 2025-01-16 14:42:56.746448 | orchestrator | changed: [localhost] 2025-01-16 14:42:58.091028 | orchestrator | 2025-01-16 14:42:58.091205 | orchestrator | TASK [Manage testbed-node-2] *************************************************** 2025-01-16 14:42:58.091263 | orchestrator | changed: [localhost] 2025-01-16 14:42:59.413021 | orchestrator | 2025-01-16 14:42:59.413150 | orchestrator | TASK [Manage testbed-node-3] *************************************************** 2025-01-16 14:42:59.413196 | orchestrator | changed: [localhost] 2025-01-16 14:43:00.758313 | orchestrator | 2025-01-16 14:43:00.758529 | orchestrator | TASK [Manage testbed-node-4] *************************************************** 2025-01-16 14:43:00.758575 | orchestrator | changed: [localhost] 2025-01-16 14:43:02.060206 | orchestrator | 2025-01-16 14:43:02.060304 | orchestrator | TASK [Manage testbed-node-5] *************************************************** 2025-01-16 14:43:02.060331 | orchestrator | changed: [localhost] 2025-01-16 14:43:03.363023 | orchestrator | 2025-01-16 14:43:03.363758 | orchestrator | TASK [Manage testbed-node-6] *************************************************** 2025-01-16 14:43:03.363801 | orchestrator | changed: [localhost] 2025-01-16 14:43:04.886326 | orchestrator | 2025-01-16 14:43:04.886461 | orchestrator | TASK [Manage testbed-node-7] 
*************************************************** 2025-01-16 14:43:04.886483 | orchestrator | changed: [localhost] 2025-01-16 14:43:04.886693 | orchestrator | 2025-01-16 14:43:06.206765 | orchestrator | TASK [Manage testbed-node-8] *************************************************** 2025-01-16 14:43:06.206883 | orchestrator | changed: [localhost] 2025-01-16 14:43:06.207397 | orchestrator | 2025-01-16 14:43:07.511152 | orchestrator | TASK [Manage testbed-node-9] *************************************************** 2025-01-16 14:43:07.511273 | orchestrator | changed: [localhost] 2025-01-16 14:43:07.512114 | orchestrator | 2025-01-16 14:43:07.658497 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 14:43:07.658605 | orchestrator | localhost : ok=15 changed=15 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 14:43:07.658625 | orchestrator | 2025-01-16 14:43:07 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-01-16 14:43:07.658641 | orchestrator | 2025-01-16 14:43:07 | INFO  | Please wait and do not abort execution. 2025-01-16 14:43:07.658691 | orchestrator | 2025-01-16 14:43:07.658722 | orchestrator | + osism netbox connect 1000 --state a 2025-01-16 14:43:08.646532 | orchestrator | 2025-01-16 14:43:08 | INFO  | Task 96eba15f-57fb-4327-9e5a-61ccab160ff0 for device testbed-node-7 is running in background 2025-01-16 14:43:08.647004 | orchestrator | 2025-01-16 14:43:08 | INFO  | Task d5a620c0-6527-4b21-a6b7-396ef3c5a8f6 for device testbed-node-8 is running in background 2025-01-16 14:43:08.648452 | orchestrator | 2025-01-16 14:43:08 | INFO  | Task a81e4cc2-0863-47df-b355-4b93896c89e2 for device testbed-switch-1 is running in background 2025-01-16 14:43:08.650893 | orchestrator | 2025-01-16 14:43:08 | INFO  | Task 200d439a-d934-4c6b-9776-611232b36cb8 for device testbed-node-9 is running in background 2025-01-16 14:43:08.653222 | orchestrator | 2025-01-16 14:43:08 | INFO  | Task 42202080-d6b5-44c7-bf05-f551d8ad391a for device testbed-node-3 is running in background 2025-01-16 14:43:08.655724 | orchestrator | 2025-01-16 14:43:08 | INFO  | Task 29e7c60d-7b5b-4f09-8798-b6592e0b8985 for device testbed-node-2 is running in background 2025-01-16 14:43:08.656324 | orchestrator | 2025-01-16 14:43:08 | INFO  | Task 75e6c255-2bd4-4f53-905e-cea9703365ab for device testbed-node-5 is running in background 2025-01-16 14:43:08.658678 | orchestrator | 2025-01-16 14:43:08 | INFO  | Task aeec86c7-b05e-4e34-8cde-7cc802c92f37 for device testbed-node-4 is running in background 2025-01-16 14:43:08.660574 | orchestrator | 2025-01-16 14:43:08 | INFO  | Task 7d36527c-ee74-4f8b-a0d8-62551f7ad97a for device testbed-manager is running in background 2025-01-16 14:43:08.662543 | orchestrator | 2025-01-16 14:43:08 | INFO  | Task 7b29dcbf-45b1-4075-962a-97949703cab4 for device testbed-switch-0 is running in background 2025-01-16 14:43:08.663343 | orchestrator | 2025-01-16 14:43:08 | INFO  | Task 13fc72cb-ace4-493f-9213-6b8d0eccf6da for device testbed-switch-2 is running in background 2025-01-16 14:43:08.668102 | orchestrator | 2025-01-16 14:43:08 | INFO  | Task 680f8934-e59f-48ff-a8e2-f388b0452a99 for device testbed-node-6 is running in background 2025-01-16 14:43:08.783622 | orchestrator | 2025-01-16 14:43:08 | INFO  | Task b8cff1ee-b574-4053-9e48-6b6631ce603a for device testbed-node-0 is running in background 2025-01-16 14:43:08.783734 | orchestrator | 2025-01-16 14:43:08 | INFO  | Task 
fd8a36b5-c4a0-4e79-975f-9929880bf834 for device testbed-node-1 is running in background 2025-01-16 14:43:08.783752 | orchestrator | 2025-01-16 14:43:08 | INFO  | Tasks are running in background. No more output. Check Flower for logs. 2025-01-16 14:43:08.783783 | orchestrator | + osism netbox disable --no-wait testbed-switch-0 2025-01-16 14:43:09.935563 | orchestrator | + osism netbox disable --no-wait testbed-switch-1 2025-01-16 14:43:11.152569 | orchestrator | + osism netbox disable --no-wait testbed-switch-2 2025-01-16 14:43:12.328273 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-01-16 14:43:12.453710 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-01-16 14:43:12.464067 | orchestrator | ceph-ansible quay.io/osism/ceph-ansible:quincy "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2025-01-16 14:43:12.464185 | orchestrator | kolla-ansible quay.io/osism/kolla-ansible:2024.1 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2025-01-16 14:43:12.464205 | orchestrator | manager-api-1 quay.io/osism/osism:latest "/usr/bin/tini -- os…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2025-01-16 14:43:12.464222 | orchestrator | manager-ara-server-1 quay.io/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2025-01-16 14:43:12.464269 | orchestrator | manager-beat-1 quay.io/osism/osism:latest "/usr/bin/tini -- os…" beat 2 minutes ago Up 2 minutes (healthy) 2025-01-16 14:43:12.464286 | orchestrator | manager-conductor-1 quay.io/osism/osism:latest "/usr/bin/tini -- os…" conductor 2 minutes ago Up 2 minutes (healthy) 2025-01-16 14:43:12.464301 | orchestrator | manager-flower-1 quay.io/osism/osism:latest "/usr/bin/tini -- os…" flower 2 minutes ago Up 2 minutes (healthy) 2025-01-16 14:43:12.464317 | orchestrator | manager-inventory_reconciler-1 quay.io/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up 2 minutes (healthy) 2025-01-16 14:43:12.464332 | orchestrator | manager-listener-1 quay.io/osism/osism:latest "/usr/bin/tini -- os…" listener 2 minutes ago Up 2 minutes (healthy) 2025-01-16 14:43:12.464348 | orchestrator | manager-mariadb-1 index.docker.io/library/mariadb:11.6.2 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2025-01-16 14:43:12.464363 | orchestrator | manager-netbox-1 quay.io/osism/osism-netbox:latest "/usr/bin/tini -- os…" netbox 2 minutes ago Up 2 minutes (healthy) 2025-01-16 14:43:12.464378 | orchestrator | manager-openstack-1 quay.io/osism/osism:latest "/usr/bin/tini -- os…" openstack 2 minutes ago Up 2 minutes (healthy) 2025-01-16 14:43:12.464394 | orchestrator | manager-redis-1 index.docker.io/library/redis:7.4.2-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2025-01-16 14:43:12.464410 | orchestrator | manager-watchdog-1 quay.io/osism/osism:latest "/usr/bin/tini -- os…" watchdog 2 minutes ago Up 2 minutes (healthy) 2025-01-16 14:43:12.464459 | orchestrator | osism-ansible quay.io/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2025-01-16 14:43:12.464475 | orchestrator | osism-kubernetes quay.io/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2025-01-16 14:43:12.464490 | orchestrator | osismclient quay.io/osism/osism:latest "/usr/bin/tini -- sl…" osismclient 2 minutes ago Up 2 
minutes (healthy) 2025-01-16 14:43:12.464526 | orchestrator | + docker compose --project-directory /opt/netbox ps 2025-01-16 14:43:12.590585 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-01-16 14:43:12.595213 | orchestrator | netbox-netbox-1 quay.io/osism/netbox:v4.1.10 "/usr/bin/tini -- /o…" netbox 6 minutes ago Up 5 minutes (healthy) 2025-01-16 14:43:12.595476 | orchestrator | netbox-netbox-worker-1 quay.io/osism/netbox:v4.1.10 "/opt/netbox/venv/bi…" netbox-worker 6 minutes ago Up 3 minutes (healthy) 2025-01-16 14:43:12.595514 | orchestrator | netbox-postgres-1 index.docker.io/library/postgres:16.6-alpine "docker-entrypoint.s…" postgres 6 minutes ago Up 5 minutes (healthy) 5432/tcp 2025-01-16 14:43:12.595536 | orchestrator | netbox-redis-1 index.docker.io/library/redis:7.4.2-alpine "docker-entrypoint.s…" redis 6 minutes ago Up 5 minutes (healthy) 6379/tcp 2025-01-16 14:43:12.595576 | orchestrator | ++ semver latest 7.0.0 2025-01-16 14:43:12.633249 | orchestrator | + [[ -1 -ge 0 ]] 2025-01-16 14:43:12.637912 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-01-16 14:43:12.638007 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-01-16 14:43:12.638092 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-01-16 14:43:13.709293 | orchestrator | 2025-01-16 14:43:13 | INFO  | Task cbe3402b-8b9d-4776-949b-b0ea0e16cc5d (resolvconf) was prepared for execution. 2025-01-16 14:43:15.900557 | orchestrator | 2025-01-16 14:43:13 | INFO  | It takes a moment until task cbe3402b-8b9d-4776-949b-b0ea0e16cc5d (resolvconf) has been started and output is visible here. 2025-01-16 14:43:15.900838 | orchestrator | 2025-01-16 14:43:18.694952 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-01-16 14:43:18.695091 | orchestrator | 2025-01-16 14:43:18.695124 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-01-16 14:43:18.695147 | orchestrator | Thursday 16 January 2025 14:43:15 +0000 (0:00:00.065) 0:00:00.065 ****** 2025-01-16 14:43:18.695180 | orchestrator | ok: [testbed-manager] 2025-01-16 14:43:18.695657 | orchestrator | 2025-01-16 14:43:18.695695 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-01-16 14:43:18.695727 | orchestrator | Thursday 16 January 2025 14:43:18 +0000 (0:00:02.793) 0:00:02.859 ****** 2025-01-16 14:43:18.734459 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:43:18.794912 | orchestrator | 2025-01-16 14:43:18.795052 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-01-16 14:43:18.795073 | orchestrator | Thursday 16 January 2025 14:43:18 +0000 (0:00:00.039) 0:00:02.899 ****** 2025-01-16 14:43:18.795106 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-01-16 14:43:18.843651 | orchestrator | 2025-01-16 14:43:18.843783 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-01-16 14:43:18.843804 | orchestrator | Thursday 16 January 2025 14:43:18 +0000 (0:00:00.060) 0:00:02.959 ****** 2025-01-16 14:43:18.843830 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-01-16 14:43:18.845173 | 
orchestrator | 2025-01-16 14:43:19.601016 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-01-16 14:43:19.601121 | orchestrator | Thursday 16 January 2025 14:43:18 +0000 (0:00:00.050) 0:00:03.009 ****** 2025-01-16 14:43:19.601176 | orchestrator | ok: [testbed-manager] 2025-01-16 14:43:19.604486 | orchestrator | 2025-01-16 14:43:19.645189 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-01-16 14:43:19.645304 | orchestrator | Thursday 16 January 2025 14:43:19 +0000 (0:00:00.755) 0:00:03.764 ****** 2025-01-16 14:43:19.645338 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:43:20.074241 | orchestrator | 2025-01-16 14:43:20.074400 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-01-16 14:43:20.074520 | orchestrator | Thursday 16 January 2025 14:43:19 +0000 (0:00:00.041) 0:00:03.806 ****** 2025-01-16 14:43:20.074570 | orchestrator | ok: [testbed-manager] 2025-01-16 14:43:20.129847 | orchestrator | 2025-01-16 14:43:20.129976 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-01-16 14:43:20.129998 | orchestrator | Thursday 16 January 2025 14:43:20 +0000 (0:00:00.428) 0:00:04.234 ****** 2025-01-16 14:43:20.131591 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:43:20.526624 | orchestrator | 2025-01-16 14:43:20.526765 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-01-16 14:43:20.526779 | orchestrator | Thursday 16 January 2025 14:43:20 +0000 (0:00:00.060) 0:00:04.295 ****** 2025-01-16 14:43:20.526801 | orchestrator | changed: [testbed-manager] 2025-01-16 14:43:21.307015 | orchestrator | 2025-01-16 14:43:21.307145 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-01-16 14:43:21.307166 | orchestrator | Thursday 16 January 2025 14:43:20 +0000 (0:00:00.396) 0:00:04.692 ****** 2025-01-16 14:43:21.307200 | orchestrator | changed: [testbed-manager] 2025-01-16 14:43:22.022379 | orchestrator | 2025-01-16 14:43:22.022514 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-01-16 14:43:22.022561 | orchestrator | Thursday 16 January 2025 14:43:21 +0000 (0:00:00.778) 0:00:05.470 ****** 2025-01-16 14:43:22.022584 | orchestrator | ok: [testbed-manager] 2025-01-16 14:43:22.025671 | orchestrator | 2025-01-16 14:43:22.085939 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-01-16 14:43:22.086182 | orchestrator | Thursday 16 January 2025 14:43:22 +0000 (0:00:00.717) 0:00:06.187 ****** 2025-01-16 14:43:22.086216 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-01-16 14:43:22.921895 | orchestrator | 2025-01-16 14:43:22.922073 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-01-16 14:43:22.922097 | orchestrator | Thursday 16 January 2025 14:43:22 +0000 (0:00:00.062) 0:00:06.250 ****** 2025-01-16 14:43:22.922130 | orchestrator | changed: [testbed-manager] 2025-01-16 14:43:22.923693 | orchestrator | 2025-01-16 14:43:22.923815 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 14:43:22.923833 | orchestrator | 2025-01-16 14:43:22 | 
INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-01-16 14:43:22.923849 | orchestrator | 2025-01-16 14:43:22 | INFO  | Please wait and do not abort execution. 2025-01-16 14:43:22.923872 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-01-16 14:43:22.924244 | orchestrator | 2025-01-16 14:43:22.924268 | orchestrator | 2025-01-16 14:43:22.924288 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 14:43:22.924498 | orchestrator | Thursday 16 January 2025 14:43:22 +0000 (0:00:00.836) 0:00:07.086 ****** 2025-01-16 14:43:22.924528 | orchestrator | =============================================================================== 2025-01-16 14:43:22.924620 | orchestrator | Gathering Facts --------------------------------------------------------- 2.79s 2025-01-16 14:43:22.924850 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 0.84s 2025-01-16 14:43:22.925025 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 0.78s 2025-01-16 14:43:22.925234 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 0.76s 2025-01-16 14:43:22.926901 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.72s 2025-01-16 14:43:22.928489 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.43s 2025-01-16 14:43:22.928540 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.40s 2025-01-16 14:43:22.928565 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.06s 2025-01-16 14:43:22.929471 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.06s 2025-01-16 14:43:23.172913 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.06s 2025-01-16 14:43:23.173034 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.05s 2025-01-16 14:43:23.173055 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.04s 2025-01-16 14:43:23.173071 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.04s 2025-01-16 14:43:23.173103 | orchestrator | + osism apply sshconfig 2025-01-16 14:43:24.199190 | orchestrator | 2025-01-16 14:43:24 | INFO  | Task a8479bca-ab24-4326-8305-6f74434e0fb2 (sshconfig) was prepared for execution. 2025-01-16 14:43:26.310927 | orchestrator | 2025-01-16 14:43:24 | INFO  | It takes a moment until task a8479bca-ab24-4326-8305-6f74434e0fb2 (sshconfig) has been started and output is visible here. 
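The resolvconf run above also shows the pattern used by every osism apply <role> call in this phase: the CLI only enqueues a task on the manager (hence the "was prepared for execution" and "It takes a moment until task ... has been started" notices), the play is then executed by one of the Ansible runner containers listed in the docker compose ps output further up, and its output is streamed back into this console. The recap's changed=3 covers linking /etc/resolv.conf to the systemd-resolved stub resolver, copying the resolved configuration files, and restarting systemd-resolved on testbed-manager. A quick manual check of the result would be run on testbed-manager itself (a sketch for orientation only, not part of the job):

$ ls -l /etc/resolv.conf      # expected: symlink to /run/systemd/resolve/stub-resolv.conf
$ resolvectl status | head    # resolvers now served by systemd-resolved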
2025-01-16 14:43:26.311057 | orchestrator | 2025-01-16 14:43:26.743240 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-01-16 14:43:26.743361 | orchestrator | 2025-01-16 14:43:26.743536 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-01-16 14:43:26.743559 | orchestrator | Thursday 16 January 2025 14:43:26 +0000 (0:00:00.072) 0:00:00.072 ****** 2025-01-16 14:43:26.743593 | orchestrator | ok: [testbed-manager] 2025-01-16 14:43:27.074465 | orchestrator | 2025-01-16 14:43:27.074579 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-01-16 14:43:27.074592 | orchestrator | Thursday 16 January 2025 14:43:26 +0000 (0:00:00.433) 0:00:00.506 ****** 2025-01-16 14:43:27.074614 | orchestrator | changed: [testbed-manager] 2025-01-16 14:43:27.074747 | orchestrator | 2025-01-16 14:43:27.074980 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-01-16 14:43:31.122645 | orchestrator | Thursday 16 January 2025 14:43:27 +0000 (0:00:00.335) 0:00:00.841 ****** 2025-01-16 14:43:31.122798 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-01-16 14:43:31.124603 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-01-16 14:43:31.124679 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-01-16 14:43:31.124705 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-01-16 14:43:31.124730 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-01-16 14:43:31.124769 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-01-16 14:43:31.125063 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-01-16 14:43:31.126775 | orchestrator | 2025-01-16 14:43:31.175257 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-01-16 14:43:31.175375 | orchestrator | Thursday 16 January 2025 14:43:31 +0000 (0:00:04.046) 0:00:04.888 ****** 2025-01-16 14:43:31.175413 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:43:31.175838 | orchestrator | 2025-01-16 14:43:31.175869 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-01-16 14:43:31.175885 | orchestrator | Thursday 16 January 2025 14:43:31 +0000 (0:00:00.052) 0:00:04.940 ****** 2025-01-16 14:43:31.562603 | orchestrator | changed: [testbed-manager] 2025-01-16 14:43:31.562772 | orchestrator | 2025-01-16 14:43:31.562815 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 14:43:31.563009 | orchestrator | 2025-01-16 14:43:31 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-01-16 14:43:31.563177 | orchestrator | 2025-01-16 14:43:31 | INFO  | Please wait and do not abort execution. 
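Before the recap below: the sshconfig role has just created the operator user's ~/.ssh/config.d/ directory on testbed-manager, written one configuration fragment per testbed host (testbed-manager and testbed-node-0 through testbed-node-5), and assembled the fragments into a single ~/.ssh/config. A quick way to inspect the result on the manager (a sketch, not part of the job; paths assume the operator user's home directory):

$ ls ~/.ssh/config.d/                              # one fragment per host written by the role
$ grep -A 3 'Host testbed-node-0' ~/.ssh/config    # the assembled entry for a single node
$ ssh testbed-node-0 hostname                      # short host names now work from the manager

The known-hosts run that follows complements this: it pre-collects every node's host keys so that these connections do not stop at an interactive fingerprint prompt.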
2025-01-16 14:43:31.563528 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-01-16 14:43:31.563686 | orchestrator | 2025-01-16 14:43:31.564241 | orchestrator | 2025-01-16 14:43:31.564598 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 14:43:31.564837 | orchestrator | Thursday 16 January 2025 14:43:31 +0000 (0:00:00.388) 0:00:05.328 ****** 2025-01-16 14:43:31.564864 | orchestrator | =============================================================================== 2025-01-16 14:43:31.564884 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 4.05s 2025-01-16 14:43:31.565681 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.43s 2025-01-16 14:43:31.565790 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.39s 2025-01-16 14:43:31.565815 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.34s 2025-01-16 14:43:31.566005 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.05s 2025-01-16 14:43:31.816611 | orchestrator | + osism apply known-hosts 2025-01-16 14:43:32.838702 | orchestrator | 2025-01-16 14:43:32 | INFO  | Task 00092a54-9eb8-4697-b724-7e384bb8d219 (known-hosts) was prepared for execution. 2025-01-16 14:43:34.982572 | orchestrator | 2025-01-16 14:43:32 | INFO  | It takes a moment until task 00092a54-9eb8-4697-b724-7e384bb8d219 (known-hosts) has been started and output is visible here. 2025-01-16 14:43:34.982747 | orchestrator | 2025-01-16 14:43:34.986125 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-01-16 14:43:38.684116 | orchestrator | 2025-01-16 14:43:38.684262 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-01-16 14:43:38.684289 | orchestrator | Thursday 16 January 2025 14:43:34 +0000 (0:00:00.076) 0:00:00.076 ****** 2025-01-16 14:43:38.684330 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-01-16 14:43:38.684618 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-01-16 14:43:38.684643 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-01-16 14:43:38.684658 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-01-16 14:43:38.684673 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-01-16 14:43:38.684688 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-01-16 14:43:38.684709 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-01-16 14:43:38.780217 | orchestrator | 2025-01-16 14:43:38.780395 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-01-16 14:43:38.780465 | orchestrator | Thursday 16 January 2025 14:43:38 +0000 (0:00:03.703) 0:00:03.780 ****** 2025-01-16 14:43:38.780495 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-01-16 14:43:38.780712 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-01-16 14:43:38.780734 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-01-16 14:43:38.780752 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-01-16 14:43:38.781023 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-01-16 14:43:38.781421 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-01-16 14:43:38.781584 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-01-16 14:43:38.781784 | orchestrator | 2025-01-16 14:43:38.782129 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-01-16 14:43:38.782389 | orchestrator | Thursday 16 January 2025 14:43:38 +0000 (0:00:00.099) 0:00:03.880 ****** 2025-01-16 14:43:39.502262 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDIR7LwEHfAccU5M+j0BdWHKFTZOHTPOp26XBWtBtlkah4/HCKpm1/k9Dfn1GUDBrJekLUZBTbp+wN521BXwBPA36TZUv4drJYqIUrnBkoqpSSxCPqYQh2YvlC1jkYEjJ1AsKchjVnUzTSQAscAFF7lAWQFN2w+K6u1zfBCj8gTjCrqTVxrighme8uvad2gyGaOJrEivf0CNLuOMnMiWLpu5CZwmssdSKu1YiOk0CLS4QSZ135x3YkNXXeGe+r9BmdYUUnRUoqknIWvvjVWjE9RZrHig7r9dNmrW7Guh7NpojUvjhuvZB7lgEoQKMaUjElB/5Hk6cU0+qEehk6wXqgadL0qXj/N9FZbdeCOJV8TzSAtN7fiFncP17x/uDDNQFKlFChADDlaw9SPfbvdPw9HdCL5PvbGy9V+R1B2bMzMwm6XwoKVEY/WNcRtKdOT5iX6jr1/k+JKRAW1NoSw/FbAgFB8y6wfI10R5I9HduDUSN8hJnr0WTIYYbBgSESpB8E=) 2025-01-16 14:43:39.502705 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDuVp2Fo/C6kPRfGcLVXqi8ibCVLxhi12vCgvxHECI9TC3ASZGxhZRf6tM6ksZmAozTVO3Q+W9wudlogArVEhfc=) 2025-01-16 14:43:39.502749 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILp5R8bVpGfeu8tU8hBly8++rdZmae69rVrogG13GXBF) 2025-01-16 14:43:39.502786 | orchestrator | 2025-01-16 14:43:39.502807 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-01-16 14:43:40.134612 | orchestrator | Thursday 16 January 2025 14:43:39 +0000 (0:00:00.718) 0:00:04.598 ****** 2025-01-16 14:43:40.134730 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILPhxs2tUt5I3H5PdrGbWFyye/uPjrPYwUXYcsahukU2) 2025-01-16 14:43:40.134933 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDd0BNvImOtpGwr0wOGsgCEJmz0JiQylflCp/LoKrhAJpY3+uuYbEW9eMl8L0SbNUv3vb1EEEY3eNsE78rDXWJFW0JC5ivkDbVT+jum3jtk5l1PKd97P1+tIBDA7N1b+nGTcs5ktYTZVsjshuRrTaQo8CyKwL7C0dR68yl8Ftl3JhQISB0CkyNXf2Ucp3mAfcf5LzHzyGYBGjYMiDmFK4UjxHVezNpqdzAxSYiO0JSFgH63thc4aqHJUIpDPfplhPdC2eqX+H0Qc783ii//OycSZ7yDkaSTqYiiGtbHLR4A3fayspRl6uc0BIOaMhLJacA+KbxGKJrzB9CI/jXg0/GlR86R044aHGldsBqxfmsjsRtqtqDOnRMb9GcyHzCq+UZS8f/7IaSkIJ6hSPuAXkI/emHpp7iG4LtPNVkh0kd/cJ8gnp26lsITK+3h/hlVrG1InB4z9xhu3+0rEVVX4XRqdb82L+mcP4ee5umne7BQf87Hvc9To1UXrMVjT223eQs=) 
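The long base64 strings running through this task's output, above and below, are the nodes' SSH host public keys: the role first runs ssh-keyscan against every host name (and, further down, against every ansible_host address such as 192.168.16.10), then appends each returned key to the known_hosts file on testbed-manager. Done by hand, one host would look roughly like this (a sketch; the key types and the default OpenSSH file location are assumptions, not taken from the log):

$ ssh-keyscan -t rsa,ecdsa,ed25519 testbed-node-0 >> ~/.ssh/known_hosts
$ ssh-keygen -F testbed-node-0 -f ~/.ssh/known_hosts    # confirm the entry was recorded

Because both the short host names and the 192.168.16.x addresses are recorded, later SSH connections from the manager match a known_hosts entry regardless of which form the inventory uses.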
2025-01-16 14:43:40.135017 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOhFIFLWdoVqwsMzfjf67g+r6gq7uNyz4sJ7Z5oLDBIdI95eNZrDQySR9tUNc7MAjqp/cj304v7NbQz2aICsV18=) 2025-01-16 14:43:40.135050 | orchestrator | 2025-01-16 14:43:40.135857 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-01-16 14:43:40.776425 | orchestrator | Thursday 16 January 2025 14:43:40 +0000 (0:00:00.634) 0:00:05.233 ****** 2025-01-16 14:43:40.776635 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDSiP9mXC6VvMvHcUMGqDrDPLACtAx2iLjS/cvM+JQf592vsInGghSli7784SKT3sCPhl9fwPFyuZbMXRkd2nQhvhxYPh1yoltu/qcp60QboMGfba9Zpk6bbxHmYKcIkGvywxFHyDghZhnsrYKH1rHNkewjDAFzWboEDpflW7Us8eC4PVMx6lui1br6dWfNgA7vOfXoqUKE6eUgbpFrmpDzMP8vD4BYIiOYweaGoxSPlcXS7Kh/QCIE1n/5IssWpnawq2aBr1lO4DlaLCk/LMwp2qCIVx7pWxs+DlABsTEnysnnfbu4Z38YyQDtCqj0CClWw7a33a80Ez/qF/GP7E7ZJRkFuyEW6AQgFaCMJ+bcFXuXogp+pwI1z22NEgBan3S586FSjyhi8ufavVtARZyC+/bJnLLFYs2/iTaZl1iF/kGWLT4zTDdjybVcF5G/wlCYASDd2w6GISN2WAS7kAJ3l+bMmOv4egmYVPBUNOKXWGgYmlIupH3UJip53FX2NrE=) 2025-01-16 14:43:40.777778 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMJsnCVNZ7HO2haAlIWbtcTBWVtNPmmBknL+meeQLyrGsFbACnC4AqUguQU++2DM7mY1EHi3FgeEmCvNf4jYE6Y=) 2025-01-16 14:43:40.777844 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC1GahoaDRSERN0cX5FMh0Uq5qzLJiKV4WjJ6wtfeSM4) 2025-01-16 14:43:40.777862 | orchestrator | 2025-01-16 14:43:40.777875 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-01-16 14:43:40.777897 | orchestrator | Thursday 16 January 2025 14:43:40 +0000 (0:00:00.643) 0:00:05.876 ****** 2025-01-16 14:43:41.420219 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG6IFkg0VirPLz4c7AuaVgHNPNBbHLl2rx66TO7vmUtmj3XYULXxYle6aYTzuOj6j3xy9NySqEE/IfTerEto6qw=) 2025-01-16 14:43:41.420832 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJY2B27ZIurnNe5tTatw1gDodyGztlMdDErnq9rkrZ+c) 2025-01-16 14:43:41.420938 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCyJnYkugFPrcpTYQYCKTRZMRk/Y4UI1KerKqDnVpDFGH+HOJTtJGLn6OAFa+TIDd9JPIpIWZa6kfSYXPIu4zakY1FhK6I4Mt3sQPimdLsr0d8KgDpDBev62xEd/52RzAvfzgE/NvnnSV+LxJ0Np++1SMSBBNxfqjRmgkiFOkEtYIXK/8hXpSOr20c4LIgfEkTnrLEdclTqji7cDU0e2Z1E4jAWnasHMEfzQNyuvRDVpoivR1Qrkdrp9zJsILtqP8OAd0zHoPt0MBoozjxzsK95z6+S66143gGiFJzoxUcR1n+8KkGWqOHKd/kren9b+pfHtlK1KdfSFtgQR5uMjaDC6bsXDsyKt+U9ORTzPV5cHZP3+rIw3u7pmUq8aidfjYvb1uRKxvUoZCEg7nVyIZYXQlRTP0cgWvahItQo3nkeb8+8AVFIeHTDJkrfRUmLttzKQgmI3/R4EKoovIPOYopGC9zlEVvGIiztVBjPsPQRREt7Ci/PJ22chckdJl+j4IM=) 2025-01-16 14:43:41.420959 | orchestrator | 2025-01-16 14:43:41.420973 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-01-16 14:43:41.421055 | orchestrator | Thursday 16 January 2025 14:43:41 +0000 (0:00:00.640) 0:00:06.517 ****** 2025-01-16 14:43:42.056787 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC1PXpGMwEqt0BKqgRjJxMAysPwc3umILsFMJLeyADqBY01qF/FBwmMP/vQbAkf5opmj+QMDAGXvzu6RYbEjtM5f3iaMKCaQrO6YuHMJMifnlUTbZ0lLN7C6VGGLotPvrCFti8eSDiarDdC4n3EnP8tgA8Up9zrOHycQI8hE9UAAoAXY2CbgjRy1YpkW8OH7778+CQ0617jZH+ynUKfd5m7/YV4UIYzuj7/ODjSGAa/zKa2QYmxgWZdveJxOXRd/044OsCTGagnl/rU3ZvdZcpjyF/rutWcC1iyK32Io0zbY6c3ut9CmdRcbx5hlDdAoTc0uoTNc8V0RyKSfWZYS8Uxy6IKZbjCRqxPgjiGSdsqSHps3sLAred8HXFStPzoIbtxei+zD6c/0Othw7el0IM8asRwehRrajhttckDaIzs+pKGElgKbahrdLh4HPDiAHyzUuFjwHHJXpWI9LL57ZY3kBgOP98JHIFfRG2cXv3Q4LBHUKUFekzU2ypHoVxPD4E=) 2025-01-16 14:43:42.056937 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLWKQLTWRXPbhQdRO6moMfEqrWKJnkRpZvI8Uw0OiPImHWR6e6eaGhZ0a8IQa6mxNUPYo0zMmVAiRfVUnsapRHM=) 2025-01-16 14:43:42.056953 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA4GS719vN1JVWfqULdNzugIRfHxh6/S2kbtCSISjn73) 2025-01-16 14:43:42.056960 | orchestrator | 2025-01-16 14:43:42.057000 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-01-16 14:43:42.057019 | orchestrator | Thursday 16 January 2025 14:43:42 +0000 (0:00:00.637) 0:00:07.155 ****** 2025-01-16 14:43:42.697194 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCcRM8jRWz+ZS0m/LCNeRoViGu8flwVg3EPXzctJe89qivPPsTWvPfo+pjVHP/zXvESX9tJTuwKHwSRjBey2vSWU9BCJpjwBlZSs/or5jQbJZCpFqHoKmIWo2DjowU039IUllt5r5Y6JYJiLg3TS0bL2v4b69nJEdA3VNvuBs9R0m4ONRIEPGi0g78BgvqbYfXwxHn4iR/on1MzyjjuKJz0I9kYXYunQQ2laseFE+kBL0HKykZdVVmXLx9dNmIqze5u4CBCXTngKYyuWA07NuNaIBRzqGbETW1e69Oq4m/jbeRz31f4HoVsr+8fuuzeZFtyRa/xzWOH8knGHO/o8tNHgPAuxtb/eH8nAAb6ZRKtrt9Ez6AfehmKHUIr789/uUg2h3TepiXJpEBgkGZ8bITJOoUX0Hj+B9gfCkyLY1as73rvmClHwbCoVphH5sTl6v72L6xrmAFF23UyE8KPM5HafNSAhwjsXMWd1EvzwkHg08AdB6IXt4Z5QP24X20+i2c=) 2025-01-16 14:43:42.697332 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPQy0wNq/VFAN1iInGq6ZEakKHLCjAdqk217CQAHyvp54IVs/p7Z4e4tlp9CTniR2+D1CW9P7+B+kroG8mdEufA=) 2025-01-16 14:43:42.697346 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGch+2b+l8CeQ75a8OovJTjMCdMXTHWBEbE530g/Xmy8) 2025-01-16 14:43:42.697357 | orchestrator | 2025-01-16 14:43:42.697369 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-01-16 14:43:42.697551 | orchestrator | Thursday 16 January 2025 14:43:42 +0000 (0:00:00.640) 0:00:07.795 ****** 2025-01-16 14:43:43.358921 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCSAS+NG1E2GmDcnKP3+xLTXHr/RAKS8jjMvJhjIY/XM8azvKQTWxF9D6xPlWgZgtnLn6Qms8iaemRt0xYjRq6uk/iiWkeAJe44OqCnEVl5SbshZsUM2KzBx5bW8Z9baCF3dD71PGAo6wJv0cRKBix8R0csJLiG1O9Wv6jYn14ZStVTLWxkXFWXIVBokqt00jR8wEqVY8RUoZwbXXc5ndgr03rlalJ7UkCHGUOuQvQ8wYprv+LYkccfzU4f4ISuD/vw3zYgU+vEonE3GKzf56Np2PlcBmYpQSOTHj+7BoweusiQmafqOsMZWRGMJUJz2Eh61JhTodJyVEU6nwzuSq3ZE3I+vH5V2OksruoNiCaIeySQRSW4aAqQFXOtAGaBnBqyfzSDk/xTlFmlJCbBIE6ItLQAVwAkoV1f4qdE1/xK4qwneulnAw3HZ38XrulJCOZoQNc1kFkkCL731YKmMymIj+BcB/606XvXFuOwxD8O97vAjXVAmo8zEGEM+A8VK+M=) 2025-01-16 14:43:43.359293 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN7Y+iR16UfFtEoMT8YivBjtQ75JenRuqpG/QTCzVzojxJNSc4r5/XfwfyWSrNJXtL39dLhl/opi0ZevgFndplk=) 2025-01-16 
14:43:43.359327 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDr3CfEnGwBxBjZ8WaxxLgeH1HWa3yMAfNU5xLSSlXU8) 2025-01-16 14:43:43.359338 | orchestrator | 2025-01-16 14:43:43.359353 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-01-16 14:43:43.359605 | orchestrator | Thursday 16 January 2025 14:43:43 +0000 (0:00:00.661) 0:00:08.456 ****** 2025-01-16 14:43:46.423814 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-01-16 14:43:46.522694 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-01-16 14:43:46.522788 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-01-16 14:43:46.522798 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-01-16 14:43:46.522806 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-01-16 14:43:46.522813 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-01-16 14:43:46.522821 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-01-16 14:43:46.522828 | orchestrator | 2025-01-16 14:43:46.522837 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-01-16 14:43:46.522845 | orchestrator | Thursday 16 January 2025 14:43:46 +0000 (0:00:03.064) 0:00:11.521 ****** 2025-01-16 14:43:46.522866 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-01-16 14:43:46.523033 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-01-16 14:43:47.153030 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-01-16 14:43:47.153208 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-01-16 14:43:47.153231 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-01-16 14:43:47.153240 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-01-16 14:43:47.153247 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-01-16 14:43:47.153254 | orchestrator | 2025-01-16 14:43:47.153262 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-01-16 14:43:47.153270 | orchestrator | Thursday 16 January 2025 14:43:46 +0000 (0:00:00.100) 0:00:11.621 ****** 2025-01-16 14:43:47.153293 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDIR7LwEHfAccU5M+j0BdWHKFTZOHTPOp26XBWtBtlkah4/HCKpm1/k9Dfn1GUDBrJekLUZBTbp+wN521BXwBPA36TZUv4drJYqIUrnBkoqpSSxCPqYQh2YvlC1jkYEjJ1AsKchjVnUzTSQAscAFF7lAWQFN2w+K6u1zfBCj8gTjCrqTVxrighme8uvad2gyGaOJrEivf0CNLuOMnMiWLpu5CZwmssdSKu1YiOk0CLS4QSZ135x3YkNXXeGe+r9BmdYUUnRUoqknIWvvjVWjE9RZrHig7r9dNmrW7Guh7NpojUvjhuvZB7lgEoQKMaUjElB/5Hk6cU0+qEehk6wXqgadL0qXj/N9FZbdeCOJV8TzSAtN7fiFncP17x/uDDNQFKlFChADDlaw9SPfbvdPw9HdCL5PvbGy9V+R1B2bMzMwm6XwoKVEY/WNcRtKdOT5iX6jr1/k+JKRAW1NoSw/FbAgFB8y6wfI10R5I9HduDUSN8hJnr0WTIYYbBgSESpB8E=) 2025-01-16 14:43:47.777825 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDuVp2Fo/C6kPRfGcLVXqi8ibCVLxhi12vCgvxHECI9TC3ASZGxhZRf6tM6ksZmAozTVO3Q+W9wudlogArVEhfc=) 2025-01-16 14:43:47.777924 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILp5R8bVpGfeu8tU8hBly8++rdZmae69rVrogG13GXBF) 2025-01-16 14:43:47.777938 | orchestrator | 2025-01-16 14:43:47.777949 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-01-16 14:43:47.777959 | orchestrator | Thursday 16 January 2025 14:43:47 +0000 (0:00:00.630) 0:00:12.252 ****** 2025-01-16 14:43:47.777983 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDd0BNvImOtpGwr0wOGsgCEJmz0JiQylflCp/LoKrhAJpY3+uuYbEW9eMl8L0SbNUv3vb1EEEY3eNsE78rDXWJFW0JC5ivkDbVT+jum3jtk5l1PKd97P1+tIBDA7N1b+nGTcs5ktYTZVsjshuRrTaQo8CyKwL7C0dR68yl8Ftl3JhQISB0CkyNXf2Ucp3mAfcf5LzHzyGYBGjYMiDmFK4UjxHVezNpqdzAxSYiO0JSFgH63thc4aqHJUIpDPfplhPdC2eqX+H0Qc783ii//OycSZ7yDkaSTqYiiGtbHLR4A3fayspRl6uc0BIOaMhLJacA+KbxGKJrzB9CI/jXg0/GlR86R044aHGldsBqxfmsjsRtqtqDOnRMb9GcyHzCq+UZS8f/7IaSkIJ6hSPuAXkI/emHpp7iG4LtPNVkh0kd/cJ8gnp26lsITK+3h/hlVrG1InB4z9xhu3+0rEVVX4XRqdb82L+mcP4ee5umne7BQf87Hvc9To1UXrMVjT223eQs=) 2025-01-16 14:43:48.420856 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOhFIFLWdoVqwsMzfjf67g+r6gq7uNyz4sJ7Z5oLDBIdI95eNZrDQySR9tUNc7MAjqp/cj304v7NbQz2aICsV18=) 2025-01-16 14:43:48.420984 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILPhxs2tUt5I3H5PdrGbWFyye/uPjrPYwUXYcsahukU2) 2025-01-16 14:43:48.421005 | orchestrator | 2025-01-16 14:43:48.421023 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-01-16 14:43:48.421038 | orchestrator | Thursday 16 January 2025 14:43:47 +0000 (0:00:00.624) 0:00:12.876 ****** 2025-01-16 14:43:48.421071 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMJsnCVNZ7HO2haAlIWbtcTBWVtNPmmBknL+meeQLyrGsFbACnC4AqUguQU++2DM7mY1EHi3FgeEmCvNf4jYE6Y=) 2025-01-16 14:43:49.049434 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDSiP9mXC6VvMvHcUMGqDrDPLACtAx2iLjS/cvM+JQf592vsInGghSli7784SKT3sCPhl9fwPFyuZbMXRkd2nQhvhxYPh1yoltu/qcp60QboMGfba9Zpk6bbxHmYKcIkGvywxFHyDghZhnsrYKH1rHNkewjDAFzWboEDpflW7Us8eC4PVMx6lui1br6dWfNgA7vOfXoqUKE6eUgbpFrmpDzMP8vD4BYIiOYweaGoxSPlcXS7Kh/QCIE1n/5IssWpnawq2aBr1lO4DlaLCk/LMwp2qCIVx7pWxs+DlABsTEnysnnfbu4Z38YyQDtCqj0CClWw7a33a80Ez/qF/GP7E7ZJRkFuyEW6AQgFaCMJ+bcFXuXogp+pwI1z22NEgBan3S586FSjyhi8ufavVtARZyC+/bJnLLFYs2/iTaZl1iF/kGWLT4zTDdjybVcF5G/wlCYASDd2w6GISN2WAS7kAJ3l+bMmOv4egmYVPBUNOKXWGgYmlIupH3UJip53FX2NrE=) 2025-01-16 
14:43:49.049633 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC1GahoaDRSERN0cX5FMh0Uq5qzLJiKV4WjJ6wtfeSM4) 2025-01-16 14:43:49.049659 | orchestrator | 2025-01-16 14:43:49.049676 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-01-16 14:43:49.049692 | orchestrator | Thursday 16 January 2025 14:43:48 +0000 (0:00:00.638) 0:00:13.514 ****** 2025-01-16 14:43:49.049725 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJY2B27ZIurnNe5tTatw1gDodyGztlMdDErnq9rkrZ+c) 2025-01-16 14:43:49.680085 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCyJnYkugFPrcpTYQYCKTRZMRk/Y4UI1KerKqDnVpDFGH+HOJTtJGLn6OAFa+TIDd9JPIpIWZa6kfSYXPIu4zakY1FhK6I4Mt3sQPimdLsr0d8KgDpDBev62xEd/52RzAvfzgE/NvnnSV+LxJ0Np++1SMSBBNxfqjRmgkiFOkEtYIXK/8hXpSOr20c4LIgfEkTnrLEdclTqji7cDU0e2Z1E4jAWnasHMEfzQNyuvRDVpoivR1Qrkdrp9zJsILtqP8OAd0zHoPt0MBoozjxzsK95z6+S66143gGiFJzoxUcR1n+8KkGWqOHKd/kren9b+pfHtlK1KdfSFtgQR5uMjaDC6bsXDsyKt+U9ORTzPV5cHZP3+rIw3u7pmUq8aidfjYvb1uRKxvUoZCEg7nVyIZYXQlRTP0cgWvahItQo3nkeb8+8AVFIeHTDJkrfRUmLttzKQgmI3/R4EKoovIPOYopGC9zlEVvGIiztVBjPsPQRREt7Ci/PJ22chckdJl+j4IM=) 2025-01-16 14:43:49.680207 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG6IFkg0VirPLz4c7AuaVgHNPNBbHLl2rx66TO7vmUtmj3XYULXxYle6aYTzuOj6j3xy9NySqEE/IfTerEto6qw=) 2025-01-16 14:43:49.680228 | orchestrator | 2025-01-16 14:43:49.680244 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-01-16 14:43:49.680260 | orchestrator | Thursday 16 January 2025 14:43:49 +0000 (0:00:00.631) 0:00:14.146 ****** 2025-01-16 14:43:49.680293 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC1PXpGMwEqt0BKqgRjJxMAysPwc3umILsFMJLeyADqBY01qF/FBwmMP/vQbAkf5opmj+QMDAGXvzu6RYbEjtM5f3iaMKCaQrO6YuHMJMifnlUTbZ0lLN7C6VGGLotPvrCFti8eSDiarDdC4n3EnP8tgA8Up9zrOHycQI8hE9UAAoAXY2CbgjRy1YpkW8OH7778+CQ0617jZH+ynUKfd5m7/YV4UIYzuj7/ODjSGAa/zKa2QYmxgWZdveJxOXRd/044OsCTGagnl/rU3ZvdZcpjyF/rutWcC1iyK32Io0zbY6c3ut9CmdRcbx5hlDdAoTc0uoTNc8V0RyKSfWZYS8Uxy6IKZbjCRqxPgjiGSdsqSHps3sLAred8HXFStPzoIbtxei+zD6c/0Othw7el0IM8asRwehRrajhttckDaIzs+pKGElgKbahrdLh4HPDiAHyzUuFjwHHJXpWI9LL57ZY3kBgOP98JHIFfRG2cXv3Q4LBHUKUFekzU2ypHoVxPD4E=) 2025-01-16 14:43:50.312792 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLWKQLTWRXPbhQdRO6moMfEqrWKJnkRpZvI8Uw0OiPImHWR6e6eaGhZ0a8IQa6mxNUPYo0zMmVAiRfVUnsapRHM=) 2025-01-16 14:43:50.420143 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA4GS719vN1JVWfqULdNzugIRfHxh6/S2kbtCSISjn73) 2025-01-16 14:43:50.420237 | orchestrator | 2025-01-16 14:43:50.420256 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-01-16 14:43:50.420273 | orchestrator | Thursday 16 January 2025 14:43:49 +0000 (0:00:00.632) 0:00:14.778 ****** 2025-01-16 14:43:50.420311 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCcRM8jRWz+ZS0m/LCNeRoViGu8flwVg3EPXzctJe89qivPPsTWvPfo+pjVHP/zXvESX9tJTuwKHwSRjBey2vSWU9BCJpjwBlZSs/or5jQbJZCpFqHoKmIWo2DjowU039IUllt5r5Y6JYJiLg3TS0bL2v4b69nJEdA3VNvuBs9R0m4ONRIEPGi0g78BgvqbYfXwxHn4iR/on1MzyjjuKJz0I9kYXYunQQ2laseFE+kBL0HKykZdVVmXLx9dNmIqze5u4CBCXTngKYyuWA07NuNaIBRzqGbETW1e69Oq4m/jbeRz31f4HoVsr+8fuuzeZFtyRa/xzWOH8knGHO/o8tNHgPAuxtb/eH8nAAb6ZRKtrt9Ez6AfehmKHUIr789/uUg2h3TepiXJpEBgkGZ8bITJOoUX0Hj+B9gfCkyLY1as73rvmClHwbCoVphH5sTl6v72L6xrmAFF23UyE8KPM5HafNSAhwjsXMWd1EvzwkHg08AdB6IXt4Z5QP24X20+i2c=) 2025-01-16 14:43:50.938161 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPQy0wNq/VFAN1iInGq6ZEakKHLCjAdqk217CQAHyvp54IVs/p7Z4e4tlp9CTniR2+D1CW9P7+B+kroG8mdEufA=) 2025-01-16 14:43:50.938385 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGch+2b+l8CeQ75a8OovJTjMCdMXTHWBEbE530g/Xmy8) 2025-01-16 14:43:50.938413 | orchestrator | 2025-01-16 14:43:50.938431 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-01-16 14:43:50.938473 | orchestrator | Thursday 16 January 2025 14:43:50 +0000 (0:00:00.632) 0:00:15.410 ****** 2025-01-16 14:43:50.938509 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDr3CfEnGwBxBjZ8WaxxLgeH1HWa3yMAfNU5xLSSlXU8) 2025-01-16 14:43:50.938698 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCSAS+NG1E2GmDcnKP3+xLTXHr/RAKS8jjMvJhjIY/XM8azvKQTWxF9D6xPlWgZgtnLn6Qms8iaemRt0xYjRq6uk/iiWkeAJe44OqCnEVl5SbshZsUM2KzBx5bW8Z9baCF3dD71PGAo6wJv0cRKBix8R0csJLiG1O9Wv6jYn14ZStVTLWxkXFWXIVBokqt00jR8wEqVY8RUoZwbXXc5ndgr03rlalJ7UkCHGUOuQvQ8wYprv+LYkccfzU4f4ISuD/vw3zYgU+vEonE3GKzf56Np2PlcBmYpQSOTHj+7BoweusiQmafqOsMZWRGMJUJz2Eh61JhTodJyVEU6nwzuSq3ZE3I+vH5V2OksruoNiCaIeySQRSW4aAqQFXOtAGaBnBqyfzSDk/xTlFmlJCbBIE6ItLQAVwAkoV1f4qdE1/xK4qwneulnAw3HZ38XrulJCOZoQNc1kFkkCL731YKmMymIj+BcB/606XvXFuOwxD8O97vAjXVAmo8zEGEM+A8VK+M=) 2025-01-16 14:43:50.938721 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN7Y+iR16UfFtEoMT8YivBjtQ75JenRuqpG/QTCzVzojxJNSc4r5/XfwfyWSrNJXtL39dLhl/opi0ZevgFndplk=) 2025-01-16 14:43:50.938737 | orchestrator | 2025-01-16 14:43:50.939194 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-01-16 14:43:50.939419 | orchestrator | Thursday 16 January 2025 14:43:50 +0000 (0:00:00.626) 0:00:16.037 ****** 2025-01-16 14:43:51.035974 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-01-16 14:43:51.036291 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-01-16 14:43:51.036330 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-01-16 14:43:51.036659 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-01-16 14:43:51.036849 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-01-16 14:43:51.037148 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-01-16 14:43:51.037516 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-01-16 14:43:51.037901 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:43:51.038140 | orchestrator | 2025-01-16 14:43:51.038347 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] 
************* 2025-01-16 14:43:51.038576 | orchestrator | Thursday 16 January 2025 14:43:51 +0000 (0:00:00.098) 0:00:16.136 ****** 2025-01-16 14:43:51.068157 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:43:51.068332 | orchestrator | 2025-01-16 14:43:51.068355 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-01-16 14:43:51.068376 | orchestrator | Thursday 16 January 2025 14:43:51 +0000 (0:00:00.032) 0:00:16.168 ****** 2025-01-16 14:43:51.103413 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:43:51.103568 | orchestrator | 2025-01-16 14:43:51.103591 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-01-16 14:43:51.103615 | orchestrator | Thursday 16 January 2025 14:43:51 +0000 (0:00:00.034) 0:00:16.203 ****** 2025-01-16 14:43:51.411291 | orchestrator | changed: [testbed-manager] 2025-01-16 14:43:51.411385 | orchestrator | 2025-01-16 14:43:51.411406 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 14:43:51.411677 | orchestrator | 2025-01-16 14:43:51 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-01-16 14:43:51.411875 | orchestrator | 2025-01-16 14:43:51 | INFO  | Please wait and do not abort execution. 2025-01-16 14:43:51.411895 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-01-16 14:43:51.412083 | orchestrator | 2025-01-16 14:43:51.412633 | orchestrator | 2025-01-16 14:43:51.412773 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 14:43:51.413057 | orchestrator | Thursday 16 January 2025 14:43:51 +0000 (0:00:00.307) 0:00:16.511 ****** 2025-01-16 14:43:51.413339 | orchestrator | =============================================================================== 2025-01-16 14:43:51.413567 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 3.70s 2025-01-16 14:43:51.413907 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 3.06s 2025-01-16 14:43:51.414080 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.72s 2025-01-16 14:43:51.414781 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.66s 2025-01-16 14:43:51.414937 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.64s 2025-01-16 14:43:51.414963 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.64s 2025-01-16 14:43:51.415154 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.64s 2025-01-16 14:43:51.415346 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.64s 2025-01-16 14:43:51.415607 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.64s 2025-01-16 14:43:51.415823 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.63s 2025-01-16 14:43:51.416022 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.63s 2025-01-16 14:43:51.416282 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.63s 2025-01-16 14:43:51.416508 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries 
----------- 0.63s 2025-01-16 14:43:51.416707 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.63s 2025-01-16 14:43:51.416825 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.63s 2025-01-16 14:43:51.417115 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.62s 2025-01-16 14:43:51.417331 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.31s 2025-01-16 14:43:51.417556 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.10s 2025-01-16 14:43:51.417708 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.10s 2025-01-16 14:43:51.417905 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.10s 2025-01-16 14:43:51.636409 | orchestrator | ++ semver latest 7.0.0 2025-01-16 14:43:51.654324 | orchestrator | + [[ -1 -ge 0 ]] 2025-01-16 14:43:52.625406 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-01-16 14:43:52.625574 | orchestrator | + osism apply nexus 2025-01-16 14:43:52.625612 | orchestrator | 2025-01-16 14:43:52 | INFO  | Task 0bb30fea-72ef-483a-9ea1-8a31bff9234e (nexus) was prepared for execution. 2025-01-16 14:43:54.714388 | orchestrator | 2025-01-16 14:43:52 | INFO  | It takes a moment until task 0bb30fea-72ef-483a-9ea1-8a31bff9234e (nexus) has been started and output is visible here. 2025-01-16 14:43:54.714582 | orchestrator | 2025-01-16 14:43:54.714713 | orchestrator | PLAY [Apply role nexus] ******************************************************** 2025-01-16 14:43:54.714736 | orchestrator | 2025-01-16 14:43:54.714752 | orchestrator | TASK [osism.services.nexus : Include config tasks] ***************************** 2025-01-16 14:43:54.714773 | orchestrator | Thursday 16 January 2025 14:43:54 +0000 (0:00:00.071) 0:00:00.071 ****** 2025-01-16 14:43:54.768615 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/nexus/tasks/config.yml for testbed-manager 2025-01-16 14:43:54.768788 | orchestrator | 2025-01-16 14:43:54.768808 | orchestrator | TASK [osism.services.nexus : Create required directories] ********************** 2025-01-16 14:43:54.768825 | orchestrator | Thursday 16 January 2025 14:43:54 +0000 (0:00:00.056) 0:00:00.127 ****** 2025-01-16 14:43:55.275429 | orchestrator | changed: [testbed-manager] => (item=/opt/nexus) 2025-01-16 14:43:55.275817 | orchestrator | changed: [testbed-manager] => (item=/opt/nexus/configuration) 2025-01-16 14:43:55.275855 | orchestrator | 2025-01-16 14:43:55.275893 | orchestrator | TASK [osism.services.nexus : Set UID for nexus_configuration_directory] ******** 2025-01-16 14:43:55.507108 | orchestrator | Thursday 16 January 2025 14:43:55 +0000 (0:00:00.506) 0:00:00.634 ****** 2025-01-16 14:43:55.507237 | orchestrator | changed: [testbed-manager] 2025-01-16 14:43:56.542883 | orchestrator | 2025-01-16 14:43:56.543002 | orchestrator | TASK [osism.services.nexus : Copy configuration files] ************************* 2025-01-16 14:43:56.543021 | orchestrator | Thursday 16 January 2025 14:43:55 +0000 (0:00:00.232) 0:00:00.866 ****** 2025-01-16 14:43:56.543052 | orchestrator | changed: [testbed-manager] => (item=nexus.properties) 2025-01-16 14:43:56.596733 | orchestrator | changed: [testbed-manager] => (item=nexus.env) 2025-01-16 14:43:56.596866 | orchestrator | 2025-01-16 14:43:56.596892 | 
orchestrator | TASK [osism.services.nexus : Include service tasks] **************************** 2025-01-16 14:43:56.596914 | orchestrator | Thursday 16 January 2025 14:43:56 +0000 (0:00:01.034) 0:00:01.900 ****** 2025-01-16 14:43:56.596977 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/nexus/tasks/service.yml for testbed-manager 2025-01-16 14:43:57.044868 | orchestrator | 2025-01-16 14:43:57.045048 | orchestrator | TASK [osism.services.nexus : Copy nexus systemd unit file] ********************* 2025-01-16 14:43:57.045064 | orchestrator | Thursday 16 January 2025 14:43:56 +0000 (0:00:00.055) 0:00:01.955 ****** 2025-01-16 14:43:57.045094 | orchestrator | changed: [testbed-manager] 2025-01-16 14:43:57.045407 | orchestrator | 2025-01-16 14:43:57.045468 | orchestrator | TASK [osism.services.nexus : Create traefik external network] ****************** 2025-01-16 14:43:57.045544 | orchestrator | Thursday 16 January 2025 14:43:57 +0000 (0:00:00.447) 0:00:02.403 ****** 2025-01-16 14:43:57.542428 | orchestrator | ok: [testbed-manager] 2025-01-16 14:43:58.095382 | orchestrator | 2025-01-16 14:43:58.095549 | orchestrator | TASK [osism.services.nexus : Copy docker-compose.yml file] ********************* 2025-01-16 14:43:58.095567 | orchestrator | Thursday 16 January 2025 14:43:57 +0000 (0:00:00.497) 0:00:02.901 ****** 2025-01-16 14:43:58.095620 | orchestrator | changed: [testbed-manager] 2025-01-16 14:43:58.096054 | orchestrator | 2025-01-16 14:43:58.096269 | orchestrator | TASK [osism.services.nexus : Stop and disable old service docker-compose@nexus] *** 2025-01-16 14:43:58.096299 | orchestrator | Thursday 16 January 2025 14:43:58 +0000 (0:00:00.551) 0:00:03.452 ****** 2025-01-16 14:43:58.688916 | orchestrator | ok: [testbed-manager] 2025-01-16 14:43:59.599323 | orchestrator | 2025-01-16 14:43:59.599530 | orchestrator | TASK [osism.services.nexus : Manage nexus service] ***************************** 2025-01-16 14:43:59.599556 | orchestrator | Thursday 16 January 2025 14:43:58 +0000 (0:00:00.594) 0:00:04.046 ****** 2025-01-16 14:43:59.599589 | orchestrator | changed: [testbed-manager] 2025-01-16 14:43:59.659781 | orchestrator | 2025-01-16 14:43:59.660037 | orchestrator | TASK [osism.services.nexus : Register that nexus service was started] ********** 2025-01-16 14:43:59.660185 | orchestrator | Thursday 16 January 2025 14:43:59 +0000 (0:00:00.910) 0:00:04.957 ****** 2025-01-16 14:43:59.660243 | orchestrator | ok: [testbed-manager] 2025-01-16 14:43:59.696107 | orchestrator | 2025-01-16 14:43:59.696225 | orchestrator | TASK [osism.services.nexus : Flush handlers] *********************************** 2025-01-16 14:43:59.696245 | orchestrator | Thursday 16 January 2025 14:43:59 +0000 (0:00:00.038) 0:00:04.996 ****** 2025-01-16 14:43:59.696260 | orchestrator | 2025-01-16 14:43:59.696274 | orchestrator | RUNNING HANDLER [osism.services.nexus : Restart nexus service] ***************** 2025-01-16 14:43:59.696288 | orchestrator | Thursday 16 January 2025 14:43:59 +0000 (0:00:00.021) 0:00:05.017 ****** 2025-01-16 14:43:59.696386 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:44:59.737960 | orchestrator | 2025-01-16 14:44:59.738126 | orchestrator | RUNNING HANDLER [osism.services.nexus : Wait for nexus service to start] ******* 2025-01-16 14:44:59.738147 | orchestrator | Thursday 16 January 2025 14:43:59 +0000 (0:00:00.037) 0:00:05.055 ****** 2025-01-16 14:44:59.738178 | orchestrator | Pausing for 60 seconds 2025-01-16 14:45:00.111611 | orchestrator | 
changed: [testbed-manager] 2025-01-16 14:45:00.111724 | orchestrator | 2025-01-16 14:45:00.111740 | orchestrator | RUNNING HANDLER [osism.services.nexus : Ensure that all containers are up] ***** 2025-01-16 14:45:00.111753 | orchestrator | Thursday 16 January 2025 14:44:59 +0000 (0:01:00.039) 0:01:05.094 ****** 2025-01-16 14:45:00.111780 | orchestrator | changed: [testbed-manager] 2025-01-16 14:45:00.414554 | orchestrator | 2025-01-16 14:45:00.414688 | orchestrator | RUNNING HANDLER [osism.services.nexus : Wait for an healthy nexus service] ***** 2025-01-16 14:45:00.414709 | orchestrator | Thursday 16 January 2025 14:45:00 +0000 (0:00:00.374) 0:01:05.469 ****** 2025-01-16 14:45:00.414743 | orchestrator | changed: [testbed-manager] 2025-01-16 14:45:00.496287 | orchestrator | 2025-01-16 14:45:00.496407 | orchestrator | TASK [osism.services.nexus : Include initialize tasks] ************************* 2025-01-16 14:45:00.496422 | orchestrator | Thursday 16 January 2025 14:45:00 +0000 (0:00:00.304) 0:01:05.773 ****** 2025-01-16 14:45:00.496447 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/nexus/tasks/initialize.yml for testbed-manager 2025-01-16 14:45:00.498721 | orchestrator | 2025-01-16 14:45:00.498960 | orchestrator | TASK [osism.services.nexus : Get setup admin password] ************************* 2025-01-16 14:45:00.499106 | orchestrator | Thursday 16 January 2025 14:45:00 +0000 (0:00:00.079) 0:01:05.853 ****** 2025-01-16 14:45:01.125943 | orchestrator | changed: [testbed-manager] 2025-01-16 14:45:01.161291 | orchestrator | 2025-01-16 14:45:01.161438 | orchestrator | TASK [osism.services.nexus : Set setup admin password] ************************* 2025-01-16 14:45:01.161467 | orchestrator | Thursday 16 January 2025 14:45:01 +0000 (0:00:00.629) 0:01:06.483 ****** 2025-01-16 14:45:01.161562 | orchestrator | ok: [testbed-manager] 2025-01-16 14:45:04.058342 | orchestrator | 2025-01-16 14:45:04.058461 | orchestrator | TASK [osism.services.nexus : Provision scripts included in the container image] *** 2025-01-16 14:45:04.058543 | orchestrator | Thursday 16 January 2025 14:45:01 +0000 (0:00:00.037) 0:01:06.520 ****** 2025-01-16 14:45:04.058581 | orchestrator | changed: [testbed-manager] => (item=anonymous.json) 2025-01-16 14:45:04.130738 | orchestrator | changed: [testbed-manager] => (item=cleanup.json) 2025-01-16 14:45:04.130887 | orchestrator | 2025-01-16 14:45:04.130908 | orchestrator | TASK [osism.services.nexus : Provision scripts included in this ansible role] *** 2025-01-16 14:45:04.130924 | orchestrator | Thursday 16 January 2025 14:45:04 +0000 (0:00:02.895) 0:01:09.416 ****** 2025-01-16 14:45:04.130970 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/nexus/tasks/declare-script.yml for testbed-manager => (item=create_repos_from_list) 2025-01-16 14:45:04.169017 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/nexus/tasks/declare-script.yml for testbed-manager => (item=setup_http_proxy) 2025-01-16 14:45:04.169181 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/nexus/tasks/declare-script.yml for testbed-manager => (item=setup_realms) 2025-01-16 14:45:04.169216 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/nexus/tasks/declare-script.yml for testbed-manager => (item=update_admin_password) 2025-01-16 14:45:04.169242 | orchestrator | 2025-01-16 
14:45:04.169268 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-01-16 14:45:04.169293 | orchestrator | Thursday 16 January 2025 14:45:04 +0000 (0:00:00.073) 0:01:09.489 ****** 2025-01-16 14:45:04.169336 | orchestrator | ok: [testbed-manager] 2025-01-16 14:45:04.253592 | orchestrator | 2025-01-16 14:45:04.253694 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-01-16 14:45:04.253707 | orchestrator | Thursday 16 January 2025 14:45:04 +0000 (0:00:00.038) 0:01:09.527 ****** 2025-01-16 14:45:04.253731 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:45:04.776938 | orchestrator | 2025-01-16 14:45:04.777065 | orchestrator | TASK [osism.services.nexus : Wait for nexus] *********************************** 2025-01-16 14:45:04.777087 | orchestrator | Thursday 16 January 2025 14:45:04 +0000 (0:00:00.084) 0:01:09.612 ****** 2025-01-16 14:45:04.777118 | orchestrator | ok: [testbed-manager] 2025-01-16 14:45:05.159213 | orchestrator | 2025-01-16 14:45:05.159329 | orchestrator | TASK [osism.services.nexus : Deleting script create_repos_from_list] *********** 2025-01-16 14:45:05.159347 | orchestrator | Thursday 16 January 2025 14:45:04 +0000 (0:00:00.523) 0:01:10.135 ****** 2025-01-16 14:45:05.159375 | orchestrator | ok: [testbed-manager] 2025-01-16 14:45:05.540528 | orchestrator | 2025-01-16 14:45:05.540616 | orchestrator | TASK [osism.services.nexus : Declaring script create_repos_from_list] ********** 2025-01-16 14:45:05.540625 | orchestrator | Thursday 16 January 2025 14:45:05 +0000 (0:00:00.381) 0:01:10.517 ****** 2025-01-16 14:45:05.540641 | orchestrator | ok: [testbed-manager] 2025-01-16 14:45:05.577333 | orchestrator | 2025-01-16 14:45:05.577415 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-01-16 14:45:05.577423 | orchestrator | Thursday 16 January 2025 14:45:05 +0000 (0:00:00.382) 0:01:10.899 ****** 2025-01-16 14:45:05.577440 | orchestrator | ok: [testbed-manager] 2025-01-16 14:45:05.577697 | orchestrator | 2025-01-16 14:45:05.577712 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-01-16 14:45:05.577725 | orchestrator | Thursday 16 January 2025 14:45:05 +0000 (0:00:00.037) 0:01:10.936 ****** 2025-01-16 14:45:05.609411 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:45:05.996441 | orchestrator | 2025-01-16 14:45:05.996634 | orchestrator | TASK [osism.services.nexus : Wait for nexus] *********************************** 2025-01-16 14:45:05.996657 | orchestrator | Thursday 16 January 2025 14:45:05 +0000 (0:00:00.031) 0:01:10.968 ****** 2025-01-16 14:45:05.996687 | orchestrator | ok: [testbed-manager] 2025-01-16 14:45:06.395286 | orchestrator | 2025-01-16 14:45:06.395514 | orchestrator | TASK [osism.services.nexus : Deleting script setup_http_proxy] ***************** 2025-01-16 14:45:06.395540 | orchestrator | Thursday 16 January 2025 14:45:05 +0000 (0:00:00.386) 0:01:11.354 ****** 2025-01-16 14:45:06.395569 | orchestrator | ok: [testbed-manager] 2025-01-16 14:45:06.783890 | orchestrator | 2025-01-16 14:45:06.784004 | orchestrator | TASK [osism.services.nexus : Declaring script setup_http_proxy] **************** 2025-01-16 14:45:06.784022 | orchestrator | Thursday 16 January 2025 14:45:06 +0000 (0:00:00.397) 0:01:11.752 ****** 2025-01-16 14:45:06.784078 | orchestrator | ok: [testbed-manager] 2025-01-16 14:45:06.823617 | orchestrator | 2025-01-16 14:45:06.823723 | 
orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-01-16 14:45:06.823742 | orchestrator | Thursday 16 January 2025 14:45:06 +0000 (0:00:00.389) 0:01:12.141 ****** 2025-01-16 14:45:06.823773 | orchestrator | ok: [testbed-manager] 2025-01-16 14:45:06.824551 | orchestrator | 2025-01-16 14:45:06.864321 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-01-16 14:45:06.864436 | orchestrator | Thursday 16 January 2025 14:45:06 +0000 (0:00:00.040) 0:01:12.182 ****** 2025-01-16 14:45:06.864469 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:45:06.864604 | orchestrator | 2025-01-16 14:45:06.864631 | orchestrator | TASK [osism.services.nexus : Wait for nexus] *********************************** 2025-01-16 14:45:06.864905 | orchestrator | Thursday 16 January 2025 14:45:06 +0000 (0:00:00.040) 0:01:12.223 ****** 2025-01-16 14:45:07.232125 | orchestrator | ok: [testbed-manager] 2025-01-16 14:45:07.232247 | orchestrator | 2025-01-16 14:45:07.232267 | orchestrator | TASK [osism.services.nexus : Deleting script setup_realms] ********************* 2025-01-16 14:45:07.232290 | orchestrator | Thursday 16 January 2025 14:45:07 +0000 (0:00:00.365) 0:01:12.589 ****** 2025-01-16 14:45:07.610131 | orchestrator | ok: [testbed-manager] 2025-01-16 14:45:07.610309 | orchestrator | 2025-01-16 14:45:07.610439 | orchestrator | TASK [osism.services.nexus : Declaring script setup_realms] ******************** 2025-01-16 14:45:07.610464 | orchestrator | Thursday 16 January 2025 14:45:07 +0000 (0:00:00.379) 0:01:12.968 ****** 2025-01-16 14:45:07.986336 | orchestrator | ok: [testbed-manager] 2025-01-16 14:45:08.026770 | orchestrator | 2025-01-16 14:45:08.026887 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-01-16 14:45:08.026906 | orchestrator | Thursday 16 January 2025 14:45:07 +0000 (0:00:00.376) 0:01:13.345 ****** 2025-01-16 14:45:08.026958 | orchestrator | ok: [testbed-manager] 2025-01-16 14:45:08.059676 | orchestrator | 2025-01-16 14:45:08.059784 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-01-16 14:45:08.059802 | orchestrator | Thursday 16 January 2025 14:45:08 +0000 (0:00:00.040) 0:01:13.385 ****** 2025-01-16 14:45:08.059831 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:45:08.433602 | orchestrator | 2025-01-16 14:45:08.434306 | orchestrator | TASK [osism.services.nexus : Wait for nexus] *********************************** 2025-01-16 14:45:08.434349 | orchestrator | Thursday 16 January 2025 14:45:08 +0000 (0:00:00.032) 0:01:13.418 ****** 2025-01-16 14:45:08.434379 | orchestrator | ok: [testbed-manager] 2025-01-16 14:45:08.806843 | orchestrator | 2025-01-16 14:45:08.806966 | orchestrator | TASK [osism.services.nexus : Deleting script update_admin_password] ************ 2025-01-16 14:45:08.806986 | orchestrator | Thursday 16 January 2025 14:45:08 +0000 (0:00:00.371) 0:01:13.790 ****** 2025-01-16 14:45:08.807020 | orchestrator | ok: [testbed-manager] 2025-01-16 14:45:09.181275 | orchestrator | 2025-01-16 14:45:09.181362 | orchestrator | TASK [osism.services.nexus : Declaring script update_admin_password] *********** 2025-01-16 14:45:09.181371 | orchestrator | Thursday 16 January 2025 14:45:08 +0000 (0:00:00.374) 0:01:14.165 ****** 2025-01-16 14:45:09.181386 | orchestrator | ok: [testbed-manager] 2025-01-16 14:45:09.182664 | orchestrator | 2025-01-16 14:45:09.182756 | orchestrator | TASK 
[osism.services.nexus : Set admin password] ******************************* 2025-01-16 14:45:09.182788 | orchestrator | Thursday 16 January 2025 14:45:09 +0000 (0:00:00.374) 0:01:14.540 ****** 2025-01-16 14:45:09.248621 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/nexus/tasks/call-script.yml for testbed-manager 2025-01-16 14:45:09.291560 | orchestrator | 2025-01-16 14:45:09.291679 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-01-16 14:45:09.291699 | orchestrator | Thursday 16 January 2025 14:45:09 +0000 (0:00:00.067) 0:01:14.607 ****** 2025-01-16 14:45:09.291731 | orchestrator | ok: [testbed-manager] 2025-01-16 14:45:09.292465 | orchestrator | 2025-01-16 14:45:09.326637 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-01-16 14:45:09.326777 | orchestrator | Thursday 16 January 2025 14:45:09 +0000 (0:00:00.043) 0:01:14.650 ****** 2025-01-16 14:45:09.326812 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:45:09.711360 | orchestrator | 2025-01-16 14:45:09.711528 | orchestrator | TASK [osism.services.nexus : Wait for nexus] *********************************** 2025-01-16 14:45:09.711552 | orchestrator | Thursday 16 January 2025 14:45:09 +0000 (0:00:00.034) 0:01:14.685 ****** 2025-01-16 14:45:09.711585 | orchestrator | ok: [testbed-manager] 2025-01-16 14:45:10.859999 | orchestrator | 2025-01-16 14:45:10.860133 | orchestrator | TASK [osism.services.nexus : Calling script update_admin_password] ************* 2025-01-16 14:45:10.860153 | orchestrator | Thursday 16 January 2025 14:45:09 +0000 (0:00:00.384) 0:01:15.070 ****** 2025-01-16 14:45:10.860186 | orchestrator | ok: [testbed-manager] 2025-01-16 14:45:10.897369 | orchestrator | 2025-01-16 14:45:10.897470 | orchestrator | TASK [osism.services.nexus : Set new admin password] *************************** 2025-01-16 14:45:10.897547 | orchestrator | Thursday 16 January 2025 14:45:10 +0000 (0:00:01.143) 0:01:16.213 ****** 2025-01-16 14:45:10.897577 | orchestrator | ok: [testbed-manager] 2025-01-16 14:45:12.506371 | orchestrator | 2025-01-16 14:45:12.506616 | orchestrator | TASK [osism.services.nexus : Allow anonymous access] *************************** 2025-01-16 14:45:12.506652 | orchestrator | Thursday 16 January 2025 14:45:10 +0000 (0:00:00.042) 0:01:16.255 ****** 2025-01-16 14:45:12.506698 | orchestrator | changed: [testbed-manager] 2025-01-16 14:45:14.061668 | orchestrator | 2025-01-16 14:45:14.061812 | orchestrator | TASK [osism.services.nexus : Cleanup default repositories] ********************* 2025-01-16 14:45:14.061832 | orchestrator | Thursday 16 January 2025 14:45:12 +0000 (0:00:01.607) 0:01:17.862 ****** 2025-01-16 14:45:14.061864 | orchestrator | changed: [testbed-manager] 2025-01-16 14:45:14.127300 | orchestrator | 2025-01-16 14:45:14.127416 | orchestrator | TASK [osism.services.nexus : Setup http proxy] ********************************* 2025-01-16 14:45:14.127435 | orchestrator | Thursday 16 January 2025 14:45:14 +0000 (0:00:01.553) 0:01:19.415 ****** 2025-01-16 14:45:14.127468 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/nexus/tasks/call-script.yml for testbed-manager 2025-01-16 14:45:14.168387 | orchestrator | 2025-01-16 14:45:14.168572 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-01-16 14:45:14.168594 | orchestrator | Thursday 16 January 2025 14:45:14 
+0000 (0:00:00.070) 0:01:19.486 ****** 2025-01-16 14:45:14.168627 | orchestrator | ok: [testbed-manager] 2025-01-16 14:45:14.205868 | orchestrator | 2025-01-16 14:45:14.206098 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-01-16 14:45:14.206143 | orchestrator | Thursday 16 January 2025 14:45:14 +0000 (0:00:00.041) 0:01:19.527 ****** 2025-01-16 14:45:14.206194 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:45:14.600188 | orchestrator | 2025-01-16 14:45:14.600310 | orchestrator | TASK [osism.services.nexus : Wait for nexus] *********************************** 2025-01-16 14:45:14.600331 | orchestrator | Thursday 16 January 2025 14:45:14 +0000 (0:00:00.037) 0:01:19.564 ****** 2025-01-16 14:45:14.600364 | orchestrator | ok: [testbed-manager] 2025-01-16 14:45:14.600686 | orchestrator | 2025-01-16 14:45:14.600722 | orchestrator | TASK [osism.services.nexus : Calling script setup_http_proxy] ****************** 2025-01-16 14:45:15.228570 | orchestrator | Thursday 16 January 2025 14:45:14 +0000 (0:00:00.393) 0:01:19.958 ****** 2025-01-16 14:45:15.228716 | orchestrator | ok: [testbed-manager] 2025-01-16 14:45:15.299104 | orchestrator | 2025-01-16 14:45:15.299199 | orchestrator | TASK [osism.services.nexus : Setup realms] ************************************* 2025-01-16 14:45:15.299213 | orchestrator | Thursday 16 January 2025 14:45:15 +0000 (0:00:00.627) 0:01:20.586 ****** 2025-01-16 14:45:15.299238 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/nexus/tasks/call-script.yml for testbed-manager 2025-01-16 14:45:15.299774 | orchestrator | 2025-01-16 14:45:15.299868 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-01-16 14:45:15.299898 | orchestrator | Thursday 16 January 2025 14:45:15 +0000 (0:00:00.072) 0:01:20.658 ****** 2025-01-16 14:45:15.403605 | orchestrator | ok: [testbed-manager] 2025-01-16 14:45:15.403818 | orchestrator | 2025-01-16 14:45:15.403844 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-01-16 14:45:15.403868 | orchestrator | Thursday 16 January 2025 14:45:15 +0000 (0:00:00.103) 0:01:20.761 ****** 2025-01-16 14:45:15.439893 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:45:15.441726 | orchestrator | 2025-01-16 14:45:15.851841 | orchestrator | TASK [osism.services.nexus : Wait for nexus] *********************************** 2025-01-16 14:45:15.852002 | orchestrator | Thursday 16 January 2025 14:45:15 +0000 (0:00:00.037) 0:01:20.799 ****** 2025-01-16 14:45:15.852059 | orchestrator | ok: [testbed-manager] 2025-01-16 14:45:16.525903 | orchestrator | 2025-01-16 14:45:16.526005 | orchestrator | TASK [osism.services.nexus : Calling script setup_realms] ********************** 2025-01-16 14:45:16.526059 | orchestrator | Thursday 16 January 2025 14:45:15 +0000 (0:00:00.409) 0:01:21.208 ****** 2025-01-16 14:45:16.526084 | orchestrator | ok: [testbed-manager] 2025-01-16 14:45:16.570188 | orchestrator | 2025-01-16 14:45:16.570302 | orchestrator | TASK [osism.services.nexus : Apply defaults to docker proxy repos] ************* 2025-01-16 14:45:16.570322 | orchestrator | Thursday 16 January 2025 14:45:16 +0000 (0:00:00.675) 0:01:21.884 ****** 2025-01-16 14:45:16.570355 | orchestrator | ok: [testbed-manager] 2025-01-16 14:45:16.615123 | orchestrator | 2025-01-16 14:45:16.615237 | orchestrator | TASK [osism.services.nexus : Add docker repositories to global repos 
list] ***** 2025-01-16 14:45:16.615266 | orchestrator | Thursday 16 January 2025 14:45:16 +0000 (0:00:00.042) 0:01:21.927 ****** 2025-01-16 14:45:16.615288 | orchestrator | ok: [testbed-manager] 2025-01-16 14:45:16.619680 | orchestrator | 2025-01-16 14:45:16.661953 | orchestrator | TASK [osism.services.nexus : Apply defaults to apt proxy repos] **************** 2025-01-16 14:45:16.662124 | orchestrator | Thursday 16 January 2025 14:45:16 +0000 (0:00:00.046) 0:01:21.973 ****** 2025-01-16 14:45:16.662160 | orchestrator | ok: [testbed-manager] 2025-01-16 14:45:16.663812 | orchestrator | 2025-01-16 14:45:16.663857 | orchestrator | TASK [osism.services.nexus : Add apt repositories to global repos list] ******** 2025-01-16 14:45:16.707173 | orchestrator | Thursday 16 January 2025 14:45:16 +0000 (0:00:00.047) 0:01:22.021 ****** 2025-01-16 14:45:16.707294 | orchestrator | ok: [testbed-manager] 2025-01-16 14:45:16.707554 | orchestrator | 2025-01-16 14:45:16.707588 | orchestrator | TASK [osism.services.nexus : Create configured repositories] ******************* 2025-01-16 14:45:16.707609 | orchestrator | Thursday 16 January 2025 14:45:16 +0000 (0:00:00.045) 0:01:22.066 ****** 2025-01-16 14:45:16.761858 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/nexus/tasks/call-script.yml for testbed-manager 2025-01-16 14:45:16.807350 | orchestrator | 2025-01-16 14:45:16.807576 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-01-16 14:45:16.807613 | orchestrator | Thursday 16 January 2025 14:45:16 +0000 (0:00:00.054) 0:01:22.120 ****** 2025-01-16 14:45:16.807652 | orchestrator | ok: [testbed-manager] 2025-01-16 14:45:16.840194 | orchestrator | 2025-01-16 14:45:16.840290 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-01-16 14:45:16.840301 | orchestrator | Thursday 16 January 2025 14:45:16 +0000 (0:00:00.043) 0:01:22.164 ****** 2025-01-16 14:45:16.840324 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:45:16.844841 | orchestrator | 2025-01-16 14:45:16.844897 | orchestrator | TASK [osism.services.nexus : Wait for nexus] *********************************** 2025-01-16 14:45:16.844912 | orchestrator | Thursday 16 January 2025 14:45:16 +0000 (0:00:00.034) 0:01:22.199 ****** 2025-01-16 14:45:17.226132 | orchestrator | ok: [testbed-manager] 2025-01-16 14:45:19.102915 | orchestrator | 2025-01-16 14:45:19.103776 | orchestrator | TASK [osism.services.nexus : Calling script create_repos_from_list] ************ 2025-01-16 14:45:19.103806 | orchestrator | Thursday 16 January 2025 14:45:17 +0000 (0:00:00.385) 0:01:22.584 ****** 2025-01-16 14:45:19.103832 | orchestrator | ok: [testbed-manager] 2025-01-16 14:45:19.163197 | orchestrator | 2025-01-16 14:45:19.163364 | orchestrator | TASK [Set osism.nexus.status fact] ********************************************* 2025-01-16 14:45:19.163424 | orchestrator | Thursday 16 January 2025 14:45:19 +0000 (0:00:01.874) 0:01:24.459 ****** 2025-01-16 14:45:19.163542 | orchestrator | included: osism.commons.state for testbed-manager 2025-01-16 14:45:19.405245 | orchestrator | 2025-01-16 14:45:19.405606 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-01-16 14:45:19.405758 | orchestrator | Thursday 16 January 2025 14:45:19 +0000 (0:00:00.062) 0:01:24.522 ****** 2025-01-16 14:45:19.405806 | orchestrator | ok: [testbed-manager] 2025-01-16 14:45:19.733030 | orchestrator | 
2025-01-16 14:45:19.733134 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-01-16 14:45:19.733155 | orchestrator | Thursday 16 January 2025 14:45:19 +0000 (0:00:00.240) 0:01:24.762 ****** 2025-01-16 14:45:19.733190 | orchestrator | changed: [testbed-manager] 2025-01-16 14:45:19.733336 | orchestrator | 2025-01-16 14:45:19.733362 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 14:45:19.733380 | orchestrator | 2025-01-16 14:45:19 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-01-16 14:45:19.733398 | orchestrator | 2025-01-16 14:45:19 | INFO  | Please wait and do not abort execution. 2025-01-16 14:45:19.733421 | orchestrator | testbed-manager : ok=64  changed=14  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-01-16 14:45:19.733712 | orchestrator | 2025-01-16 14:45:19.734104 | orchestrator | 2025-01-16 14:45:19.734345 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 14:45:19.734725 | orchestrator | Thursday 16 January 2025 14:45:19 +0000 (0:00:00.328) 0:01:25.091 ****** 2025-01-16 14:45:19.735026 | orchestrator | =============================================================================== 2025-01-16 14:45:19.735303 | orchestrator | osism.services.nexus : Wait for nexus service to start ----------------- 60.04s 2025-01-16 14:45:19.735662 | orchestrator | osism.services.nexus : Provision scripts included in the container image --- 2.90s 2025-01-16 14:45:19.735845 | orchestrator | osism.services.nexus : Calling script create_repos_from_list ------------ 1.87s 2025-01-16 14:45:19.736026 | orchestrator | osism.services.nexus : Allow anonymous access --------------------------- 1.61s 2025-01-16 14:45:19.736337 | orchestrator | osism.services.nexus : Cleanup default repositories --------------------- 1.55s 2025-01-16 14:45:19.736732 | orchestrator | osism.services.nexus : Calling script update_admin_password ------------- 1.14s 2025-01-16 14:45:19.737008 | orchestrator | osism.services.nexus : Copy configuration files ------------------------- 1.03s 2025-01-16 14:45:19.737045 | orchestrator | osism.services.nexus : Manage nexus service ----------------------------- 0.91s 2025-01-16 14:45:19.737149 | orchestrator | osism.services.nexus : Calling script setup_realms ---------------------- 0.68s 2025-01-16 14:45:19.737396 | orchestrator | osism.services.nexus : Get setup admin password ------------------------- 0.63s 2025-01-16 14:45:19.737833 | orchestrator | osism.services.nexus : Calling script setup_http_proxy ------------------ 0.63s 2025-01-16 14:45:19.738131 | orchestrator | osism.services.nexus : Stop and disable old service docker-compose@nexus --- 0.59s 2025-01-16 14:45:19.738167 | orchestrator | osism.services.nexus : Copy docker-compose.yml file --------------------- 0.55s 2025-01-16 14:45:19.738405 | orchestrator | osism.services.nexus : Wait for nexus ----------------------------------- 0.52s 2025-01-16 14:45:19.738644 | orchestrator | osism.services.nexus : Create required directories ---------------------- 0.51s 2025-01-16 14:45:19.738745 | orchestrator | osism.services.nexus : Create traefik external network ------------------ 0.50s 2025-01-16 14:45:19.738997 | orchestrator | osism.services.nexus : Copy nexus systemd unit file --------------------- 0.45s 2025-01-16 14:45:19.739205 | orchestrator | osism.services.nexus : Wait for nexus 
----------------------------------- 0.41s 2025-01-16 14:45:19.739461 | orchestrator | osism.services.nexus : Deleting script setup_http_proxy ----------------- 0.40s 2025-01-16 14:45:19.740017 | orchestrator | osism.services.nexus : Wait for nexus ----------------------------------- 0.39s 2025-01-16 14:45:19.948556 | orchestrator | + [[ true == \t\r\u\e ]] 2025-01-16 14:45:19.952127 | orchestrator | + sh -c '/opt/configuration/scripts/set-docker-registry.sh nexus.testbed.osism.xyz:8193' 2025-01-16 14:45:19.952334 | orchestrator | + set -e 2025-01-16 14:45:19.953319 | orchestrator | + source /opt/manager-vars.sh 2025-01-16 14:45:19.953376 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-01-16 14:45:19.953391 | orchestrator | ++ NUMBER_OF_NODES=6 2025-01-16 14:45:19.953405 | orchestrator | ++ export CEPH_VERSION=quincy 2025-01-16 14:45:19.953419 | orchestrator | ++ CEPH_VERSION=quincy 2025-01-16 14:45:19.953434 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-01-16 14:45:19.953451 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-01-16 14:45:19.953474 | orchestrator | ++ export MANAGER_VERSION=latest 2025-01-16 14:45:19.953517 | orchestrator | ++ MANAGER_VERSION=latest 2025-01-16 14:45:19.953541 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-01-16 14:45:19.953566 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-01-16 14:45:19.953590 | orchestrator | ++ export ARA=false 2025-01-16 14:45:19.953612 | orchestrator | ++ ARA=false 2025-01-16 14:45:19.953627 | orchestrator | ++ export TEMPEST=false 2025-01-16 14:45:19.953640 | orchestrator | ++ TEMPEST=false 2025-01-16 14:45:19.953654 | orchestrator | ++ export IS_ZUUL=true 2025-01-16 14:45:19.953668 | orchestrator | ++ IS_ZUUL=true 2025-01-16 14:45:19.953682 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.54 2025-01-16 14:45:19.953695 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.54 2025-01-16 14:45:19.953709 | orchestrator | ++ export EXTERNAL_API=false 2025-01-16 14:45:19.953731 | orchestrator | ++ EXTERNAL_API=false 2025-01-16 14:45:19.953778 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-01-16 14:45:19.953793 | orchestrator | ++ IMAGE_USER=ubuntu 2025-01-16 14:45:19.953807 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-01-16 14:45:19.953821 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-01-16 14:45:19.953835 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-01-16 14:45:19.953848 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-01-16 14:45:19.953862 | orchestrator | + DOCKER_REGISTRY=nexus.testbed.osism.xyz:8193 2025-01-16 14:45:19.953876 | orchestrator | + sed -i 's#ceph_docker_registry: .*#ceph_docker_registry: nexus.testbed.osism.xyz:8193#g' /opt/configuration/inventory/group_vars/all/registries.yml 2025-01-16 14:45:19.953926 | orchestrator | + sed -i 's#docker_registry_ansible: .*#docker_registry_ansible: nexus.testbed.osism.xyz:8193#g' /opt/configuration/inventory/group_vars/all/registries.yml 2025-01-16 14:45:19.954716 | orchestrator | + sed -i 's#docker_registry_kolla: .*#docker_registry_kolla: nexus.testbed.osism.xyz:8193#g' /opt/configuration/inventory/group_vars/all/registries.yml 2025-01-16 14:45:19.956102 | orchestrator | + sed -i 's#docker_registry_netbox: .*#docker_registry_netbox: nexus.testbed.osism.xyz:8193#g' /opt/configuration/inventory/group_vars/all/registries.yml 2025-01-16 14:45:19.957469 | orchestrator | + [[ nexus.testbed.osism.xyz:8193 == \o\s\i\s\m\.\h\a\r\b\o\r\.\r\e\g\i\o\.\d\i\g\i\t\a\l ]] 2025-01-16 14:45:19.957679 | orchestrator | + [[ latest == 
\l\a\t\e\s\t ]] 2025-01-16 14:45:19.959454 | orchestrator | + sed -i 's/docker_namespace: osism/docker_namespace: kolla/' /opt/configuration/inventory/group_vars/all/kolla.yml 2025-01-16 14:45:19.959528 | orchestrator | + osism apply squid 2025-01-16 14:45:20.985449 | orchestrator | 2025-01-16 14:45:20 | INFO  | Task 64e08084-5f5a-410b-95df-f6e92186d7d7 (squid) was prepared for execution. 2025-01-16 14:45:23.214472 | orchestrator | 2025-01-16 14:45:20 | INFO  | It takes a moment until task 64e08084-5f5a-410b-95df-f6e92186d7d7 (squid) has been started and output is visible here. 2025-01-16 14:45:23.214687 | orchestrator | 2025-01-16 14:45:23.267885 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-01-16 14:45:23.267995 | orchestrator | 2025-01-16 14:45:23.268029 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-01-16 14:45:23.268042 | orchestrator | Thursday 16 January 2025 14:45:23 +0000 (0:00:00.074) 0:00:00.074 ****** 2025-01-16 14:45:23.268071 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-01-16 14:45:23.268872 | orchestrator | 2025-01-16 14:45:23.268922 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-01-16 14:45:24.157167 | orchestrator | Thursday 16 January 2025 14:45:23 +0000 (0:00:00.058) 0:00:00.133 ****** 2025-01-16 14:45:24.158237 | orchestrator | ok: [testbed-manager] 2025-01-16 14:45:24.872218 | orchestrator | 2025-01-16 14:45:24.872345 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-01-16 14:45:24.872365 | orchestrator | Thursday 16 January 2025 14:45:24 +0000 (0:00:00.887) 0:00:01.020 ****** 2025-01-16 14:45:24.872397 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-01-16 14:45:24.875700 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-01-16 14:45:24.875789 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-01-16 14:45:24.875809 | orchestrator | 2025-01-16 14:45:24.875825 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-01-16 14:45:24.875853 | orchestrator | Thursday 16 January 2025 14:45:24 +0000 (0:00:00.716) 0:00:01.736 ****** 2025-01-16 14:45:25.511262 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-01-16 14:45:25.728156 | orchestrator | 2025-01-16 14:45:25.728263 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-01-16 14:45:25.728272 | orchestrator | Thursday 16 January 2025 14:45:25 +0000 (0:00:00.635) 0:00:02.372 ****** 2025-01-16 14:45:25.728290 | orchestrator | ok: [testbed-manager] 2025-01-16 14:45:26.311854 | orchestrator | 2025-01-16 14:45:26.311979 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-01-16 14:45:26.311999 | orchestrator | Thursday 16 January 2025 14:45:25 +0000 (0:00:00.220) 0:00:02.593 ****** 2025-01-16 14:45:26.312031 | orchestrator | changed: [testbed-manager] 2025-01-16 14:45:26.314906 | orchestrator | 2025-01-16 14:45:52.874675 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-01-16 14:45:52.874797 | orchestrator | Thursday 16 January 2025 14:45:26 +0000 (0:00:00.582) 0:00:03.176 ****** 2025-01-16 
14:45:52.874826 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 2025-01-16 14:46:04.669149 | orchestrator | ok: [testbed-manager] 2025-01-16 14:46:04.669249 | orchestrator | 2025-01-16 14:46:04.669258 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-01-16 14:46:04.669265 | orchestrator | Thursday 16 January 2025 14:45:52 +0000 (0:00:26.560) 0:00:29.736 ****** 2025-01-16 14:46:04.669281 | orchestrator | changed: [testbed-manager] 2025-01-16 14:47:04.707630 | orchestrator | 2025-01-16 14:47:04.707805 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-01-16 14:47:04.707846 | orchestrator | Thursday 16 January 2025 14:46:04 +0000 (0:00:11.794) 0:00:41.530 ****** 2025-01-16 14:47:04.707896 | orchestrator | Pausing for 60 seconds 2025-01-16 14:47:04.741294 | orchestrator | changed: [testbed-manager] 2025-01-16 14:47:04.741523 | orchestrator | 2025-01-16 14:47:04.741628 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-01-16 14:47:04.741659 | orchestrator | Thursday 16 January 2025 14:47:04 +0000 (0:01:00.039) 0:01:41.570 ****** 2025-01-16 14:47:04.741710 | orchestrator | ok: [testbed-manager] 2025-01-16 14:47:05.089794 | orchestrator | 2025-01-16 14:47:05.089906 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-01-16 14:47:05.089923 | orchestrator | Thursday 16 January 2025 14:47:04 +0000 (0:00:00.036) 0:01:41.606 ****** 2025-01-16 14:47:05.089951 | orchestrator | changed: [testbed-manager] 2025-01-16 14:47:05.090394 | orchestrator | 2025-01-16 14:47:05.090427 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 14:47:05.090448 | orchestrator | 2025-01-16 14:47:05 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-01-16 14:47:05.090471 | orchestrator | 2025-01-16 14:47:05 | INFO  | Please wait and do not abort execution. 
2025-01-16 14:47:05.090715 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 14:47:05.090943 | orchestrator | 2025-01-16 14:47:05.090965 | orchestrator | 2025-01-16 14:47:05.090983 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 14:47:05.091359 | orchestrator | Thursday 16 January 2025 14:47:05 +0000 (0:00:00.347) 0:01:41.954 ****** 2025-01-16 14:47:05.091410 | orchestrator | =============================================================================== 2025-01-16 14:47:05.091592 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.04s 2025-01-16 14:47:05.091782 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 26.56s 2025-01-16 14:47:05.091927 | orchestrator | osism.services.squid : Restart squid service --------------------------- 11.79s 2025-01-16 14:47:05.092151 | orchestrator | osism.services.squid : Install required packages ------------------------ 0.89s 2025-01-16 14:47:05.092401 | orchestrator | osism.services.squid : Create required directories ---------------------- 0.72s 2025-01-16 14:47:05.092648 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 0.64s 2025-01-16 14:47:05.092806 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.58s 2025-01-16 14:47:05.093041 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.35s 2025-01-16 14:47:05.093713 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.22s 2025-01-16 14:47:05.093875 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.06s 2025-01-16 14:47:05.093899 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.04s 2025-01-16 14:47:05.315645 | orchestrator | + rm -f /opt/configuration/environments/kolla/files/overlays/horizon/_9999-custom-settings.py 2025-01-16 14:47:05.318151 | orchestrator | + rm -f /opt/configuration/environments/kolla/files/overlays/horizon/custom_local_settings 2025-01-16 14:47:05.318952 | orchestrator | + rm -f /opt/configuration/environments/kolla/files/overlays/keystone/wsgi-keystone.conf 2025-01-16 14:47:05.320444 | orchestrator | + rm -f /opt/configuration/environments/kolla/group_vars/keystone.yml 2025-01-16 14:47:05.321644 | orchestrator | + rm -rf /opt/configuration/environments/kolla/files/overlays/keystone/federation 2025-01-16 14:47:05.323975 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-01-16 14:47:06.334821 | orchestrator | 2025-01-16 14:47:06 | INFO  | Task 94c04e50-f3f6-4199-8ee5-94f6cc7d7a0f (operator) was prepared for execution. 2025-01-16 14:47:08.443963 | orchestrator | 2025-01-16 14:47:06 | INFO  | It takes a moment until task 94c04e50-f3f6-4199-8ee5-94f6cc7d7a0f (operator) has been started and output is visible here. 
2025-01-16 14:47:08.444062 | orchestrator | 2025-01-16 14:47:08.444771 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-01-16 14:47:10.506926 | orchestrator | 2025-01-16 14:47:10.507050 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-01-16 14:47:10.507066 | orchestrator | Thursday 16 January 2025 14:47:08 +0000 (0:00:00.059) 0:00:00.059 ****** 2025-01-16 14:47:10.507094 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:47:10.508177 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:47:10.508292 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:47:10.508426 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:47:10.508449 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:47:10.508479 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:47:10.508652 | orchestrator | 2025-01-16 14:47:10.508683 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-01-16 14:47:10.508783 | orchestrator | Thursday 16 January 2025 14:47:10 +0000 (0:00:02.065) 0:00:02.125 ****** 2025-01-16 14:47:10.951772 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:47:10.951906 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:47:10.951919 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:47:10.951962 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:47:10.951975 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:47:10.952909 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:47:10.952957 | orchestrator | 2025-01-16 14:47:10.993912 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-01-16 14:47:10.994085 | orchestrator | 2025-01-16 14:47:10.994100 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-01-16 14:47:10.994112 | orchestrator | Thursday 16 January 2025 14:47:10 +0000 (0:00:00.445) 0:00:02.570 ****** 2025-01-16 14:47:10.994165 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:47:11.011303 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:47:11.033575 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:47:11.062399 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:47:11.062913 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:47:11.062953 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:47:11.063234 | orchestrator | 2025-01-16 14:47:11.063758 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-01-16 14:47:11.063987 | orchestrator | Thursday 16 January 2025 14:47:11 +0000 (0:00:00.110) 0:00:02.680 ****** 2025-01-16 14:47:11.105402 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:47:11.117190 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:47:11.132681 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:47:11.160433 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:47:11.162814 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:47:11.162923 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:47:11.508116 | orchestrator | 2025-01-16 14:47:11.508344 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-01-16 14:47:11.508378 | orchestrator | Thursday 16 January 2025 14:47:11 +0000 (0:00:00.097) 0:00:02.778 ****** 2025-01-16 14:47:11.508418 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:47:11.508715 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:47:11.508746 | orchestrator | changed: [testbed-node-4] 2025-01-16 
14:47:11.508771 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:47:11.509034 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:47:11.509191 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:47:11.509591 | orchestrator | 2025-01-16 14:47:11.509855 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-01-16 14:47:11.510121 | orchestrator | Thursday 16 January 2025 14:47:11 +0000 (0:00:00.347) 0:00:03.125 ****** 2025-01-16 14:47:11.961462 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:47:11.962518 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:47:11.962634 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:47:11.962654 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:47:11.962671 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:47:11.962688 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:47:11.962710 | orchestrator | 2025-01-16 14:47:11.962794 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-01-16 14:47:11.962945 | orchestrator | Thursday 16 January 2025 14:47:11 +0000 (0:00:00.454) 0:00:03.579 ****** 2025-01-16 14:47:12.625949 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-01-16 14:47:12.627292 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-01-16 14:47:13.329781 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-01-16 14:47:13.329988 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-01-16 14:47:13.330011 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-01-16 14:47:13.330096 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-01-16 14:47:13.330110 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-01-16 14:47:13.330123 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-01-16 14:47:13.330137 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-01-16 14:47:13.330151 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-01-16 14:47:13.330164 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-01-16 14:47:13.330178 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-01-16 14:47:13.330191 | orchestrator | 2025-01-16 14:47:13.330203 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-01-16 14:47:13.330213 | orchestrator | Thursday 16 January 2025 14:47:12 +0000 (0:00:00.662) 0:00:04.242 ****** 2025-01-16 14:47:13.330236 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:47:13.330330 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:47:13.330341 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:47:13.330352 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:47:13.330519 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:47:13.330725 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:47:13.330955 | orchestrator | 2025-01-16 14:47:13.331153 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-01-16 14:47:13.331477 | orchestrator | Thursday 16 January 2025 14:47:13 +0000 (0:00:00.705) 0:00:04.948 ****** 2025-01-16 14:47:14.031902 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-01-16 14:47:14.033463 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-01-16 14:47:14.033507 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-01-16 14:47:14.123904 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-01-16 14:47:14.124023 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-01-16 14:47:14.124036 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-01-16 14:47:14.124615 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-01-16 14:47:14.125338 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-01-16 14:47:14.125454 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-01-16 14:47:14.125475 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-01-16 14:47:14.125674 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-01-16 14:47:14.125694 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-01-16 14:47:14.125702 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-01-16 14:47:14.125710 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-01-16 14:47:14.125717 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-01-16 14:47:14.125724 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-01-16 14:47:14.125736 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-01-16 14:47:14.126480 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-01-16 14:47:14.126682 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-01-16 14:47:14.126885 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-01-16 14:47:14.127094 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-01-16 14:47:14.127296 | orchestrator | 2025-01-16 14:47:14.127568 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-01-16 14:47:14.127756 | orchestrator | Thursday 16 January 2025 14:47:14 +0000 (0:00:00.793) 0:00:05.741 ****** 2025-01-16 14:47:14.472516 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:47:14.472965 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:47:14.472999 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:47:14.473015 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:47:14.473542 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:47:14.473595 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:47:14.473613 | orchestrator | 2025-01-16 14:47:14.515254 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-01-16 14:47:14.515342 | orchestrator | Thursday 16 January 2025 14:47:14 +0000 (0:00:00.348) 0:00:06.090 ****** 2025-01-16 14:47:14.515364 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:47:14.529272 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:47:14.544902 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:47:14.569259 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:47:14.569524 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:47:14.569600 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:47:14.569684 | orchestrator | 2025-01-16 14:47:14.569702 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 
2025-01-16 14:47:15.097832 | orchestrator | Thursday 16 January 2025 14:47:14 +0000 (0:00:00.096) 0:00:06.187 ****** 2025-01-16 14:47:15.098464 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-01-16 14:47:15.098656 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:47:15.098671 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-01-16 14:47:15.098676 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:47:15.098682 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-01-16 14:47:15.098689 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:47:15.098696 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-01-16 14:47:15.098704 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:47:15.098709 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-01-16 14:47:15.098714 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:47:15.098719 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-01-16 14:47:15.098724 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:47:15.098731 | orchestrator | 2025-01-16 14:47:15.098739 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-01-16 14:47:15.098751 | orchestrator | Thursday 16 January 2025 14:47:15 +0000 (0:00:00.527) 0:00:06.714 ****** 2025-01-16 14:47:15.121169 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:47:15.133126 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:47:15.148056 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:47:15.160769 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:47:15.179147 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:47:15.179343 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:47:15.179378 | orchestrator | 2025-01-16 14:47:15.179401 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-01-16 14:47:15.179696 | orchestrator | Thursday 16 January 2025 14:47:15 +0000 (0:00:00.082) 0:00:06.797 ****** 2025-01-16 14:47:15.204369 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:47:15.216371 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:47:15.228273 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:47:15.242705 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:47:15.260260 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:47:15.260439 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:47:15.260459 | orchestrator | 2025-01-16 14:47:15.260479 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-01-16 14:47:15.260646 | orchestrator | Thursday 16 January 2025 14:47:15 +0000 (0:00:00.080) 0:00:06.878 ****** 2025-01-16 14:47:15.285827 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:47:15.298132 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:47:15.310130 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:47:15.323727 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:47:15.340512 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:47:15.340825 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:47:15.340870 | orchestrator | 2025-01-16 14:47:15.340901 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-01-16 14:47:15.341028 | orchestrator | Thursday 16 January 2025 14:47:15 +0000 (0:00:00.080) 0:00:06.959 ****** 2025-01-16 14:47:15.709471 | orchestrator | changed: [testbed-node-0] 2025-01-16 
14:47:15.711596 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:47:15.748503 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:47:15.748677 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:47:15.748697 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:47:15.748713 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:47:15.748727 | orchestrator | 2025-01-16 14:47:15.748743 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-01-16 14:47:15.748758 | orchestrator | Thursday 16 January 2025 14:47:15 +0000 (0:00:00.368) 0:00:07.327 ****** 2025-01-16 14:47:15.748787 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:47:15.766437 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:47:15.773977 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:47:15.836428 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:47:15.836736 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:47:15.836760 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:47:15.836772 | orchestrator | 2025-01-16 14:47:15.836815 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 14:47:15.837106 | orchestrator | 2025-01-16 14:47:15 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-01-16 14:47:15.837142 | orchestrator | 2025-01-16 14:47:15 | INFO  | Please wait and do not abort execution. 2025-01-16 14:47:15.837169 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-01-16 14:47:15.837384 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-01-16 14:47:15.837416 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-01-16 14:47:15.837662 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-01-16 14:47:15.837831 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-01-16 14:47:15.837998 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-01-16 14:47:15.838246 | orchestrator | 2025-01-16 14:47:15.838395 | orchestrator | 2025-01-16 14:47:15.838739 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 14:47:15.838854 | orchestrator | Thursday 16 January 2025 14:47:15 +0000 (0:00:00.127) 0:00:07.455 ****** 2025-01-16 14:47:15.839003 | orchestrator | =============================================================================== 2025-01-16 14:47:15.839131 | orchestrator | Gathering Facts --------------------------------------------------------- 2.07s 2025-01-16 14:47:15.839358 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 0.79s 2025-01-16 14:47:15.839493 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 0.71s 2025-01-16 14:47:15.839869 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 0.66s 2025-01-16 14:47:15.840063 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.53s 2025-01-16 14:47:15.840088 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.45s 2025-01-16 14:47:15.840232 | orchestrator | Do not require 
tty for all users ---------------------------------------- 0.45s 2025-01-16 14:47:15.840417 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.37s 2025-01-16 14:47:15.840537 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.35s 2025-01-16 14:47:15.840734 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.35s 2025-01-16 14:47:15.840904 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.13s 2025-01-16 14:47:15.841054 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.11s 2025-01-16 14:47:15.841150 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.10s 2025-01-16 14:47:15.841359 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.10s 2025-01-16 14:47:15.841607 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.08s 2025-01-16 14:47:15.841705 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.08s 2025-01-16 14:47:15.842185 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.08s 2025-01-16 14:47:16.047819 | orchestrator | + osism apply --environment custom facts 2025-01-16 14:47:16.964273 | orchestrator | 2025-01-16 14:47:16 | INFO  | Trying to run play facts in environment custom 2025-01-16 14:47:16.994311 | orchestrator | 2025-01-16 14:47:16 | INFO  | Task 206fb738-9a8d-4675-8d11-57c2acf3de34 (facts) was prepared for execution. 2025-01-16 14:47:19.052038 | orchestrator | 2025-01-16 14:47:16 | INFO  | It takes a moment until task 206fb738-9a8d-4675-8d11-57c2acf3de34 (facts) has been started and output is visible here. 
2025-01-16 14:47:19.052160 | orchestrator | 2025-01-16 14:47:19.052428 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-01-16 14:47:19.052462 | orchestrator | 2025-01-16 14:47:19.053040 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-01-16 14:47:19.053111 | orchestrator | Thursday 16 January 2025 14:47:19 +0000 (0:00:00.053) 0:00:00.053 ****** 2025-01-16 14:47:19.900418 | orchestrator | ok: [testbed-manager] 2025-01-16 14:47:19.901741 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:47:19.901819 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:47:19.901839 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:47:19.901870 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:47:19.901991 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:47:19.902166 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:47:19.902198 | orchestrator | 2025-01-16 14:47:19.902462 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-01-16 14:47:19.902654 | orchestrator | Thursday 16 January 2025 14:47:19 +0000 (0:00:00.848) 0:00:00.902 ****** 2025-01-16 14:47:20.596110 | orchestrator | ok: [testbed-manager] 2025-01-16 14:47:20.596946 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:47:20.597004 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:47:20.597028 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:47:20.598393 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:47:20.598444 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:47:20.598456 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:47:20.598474 | orchestrator | 2025-01-16 14:47:20.633714 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-01-16 14:47:20.633808 | orchestrator | 2025-01-16 14:47:20.633815 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-01-16 14:47:20.633820 | orchestrator | Thursday 16 January 2025 14:47:20 +0000 (0:00:00.695) 0:00:01.598 ****** 2025-01-16 14:47:20.633836 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:47:20.660544 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:47:20.662584 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:47:20.662610 | orchestrator | 2025-01-16 14:47:20.662624 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-01-16 14:47:20.744984 | orchestrator | Thursday 16 January 2025 14:47:20 +0000 (0:00:00.065) 0:00:01.664 ****** 2025-01-16 14:47:20.745114 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:47:20.819995 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:47:20.820124 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:47:20.820146 | orchestrator | 2025-01-16 14:47:20.820164 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-01-16 14:47:20.820181 | orchestrator | Thursday 16 January 2025 14:47:20 +0000 (0:00:00.081) 0:00:01.745 ****** 2025-01-16 14:47:20.820211 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:47:20.820630 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:47:20.820681 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:47:20.901961 | orchestrator | 2025-01-16 14:47:20.902176 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-01-16 14:47:20.902195 | orchestrator | Thursday 
16 January 2025 14:47:20 +0000 (0:00:00.078) 0:00:01.823 ****** 2025-01-16 14:47:20.902294 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 14:47:21.163501 | orchestrator | 2025-01-16 14:47:21.163770 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-01-16 14:47:21.163792 | orchestrator | Thursday 16 January 2025 14:47:20 +0000 (0:00:00.082) 0:00:01.905 ****** 2025-01-16 14:47:21.163815 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:47:21.163909 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:47:21.163943 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:47:21.163951 | orchestrator | 2025-01-16 14:47:21.163959 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-01-16 14:47:21.163970 | orchestrator | Thursday 16 January 2025 14:47:21 +0000 (0:00:00.261) 0:00:02.166 ****** 2025-01-16 14:47:21.224945 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:47:21.225206 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:47:21.225232 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:47:21.225257 | orchestrator | 2025-01-16 14:47:21.225282 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-01-16 14:47:21.225317 | orchestrator | Thursday 16 January 2025 14:47:21 +0000 (0:00:00.061) 0:00:02.228 ****** 2025-01-16 14:47:21.762730 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:47:21.763154 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:47:21.763203 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:47:21.763230 | orchestrator | 2025-01-16 14:47:21.763348 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-01-16 14:47:21.763641 | orchestrator | Thursday 16 January 2025 14:47:21 +0000 (0:00:00.536) 0:00:02.764 ****** 2025-01-16 14:47:22.037691 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:47:22.037793 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:47:22.037811 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:47:22.038076 | orchestrator | 2025-01-16 14:47:22.038159 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-01-16 14:47:22.038332 | orchestrator | Thursday 16 January 2025 14:47:22 +0000 (0:00:00.276) 0:00:03.040 ****** 2025-01-16 14:47:22.585123 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:47:31.317518 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:47:31.318399 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:47:31.318443 | orchestrator | 2025-01-16 14:47:31.318464 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-01-16 14:47:31.318491 | orchestrator | Thursday 16 January 2025 14:47:22 +0000 (0:00:00.545) 0:00:03.586 ****** 2025-01-16 14:47:31.318542 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:47:31.352248 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:47:31.352408 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:47:31.352441 | orchestrator | 2025-01-16 14:47:31.352468 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-01-16 14:47:31.352492 | orchestrator | Thursday 16 January 2025 14:47:31 +0000 (0:00:08.731) 0:00:12.318 ****** 2025-01-16 14:47:31.352538 | orchestrator | 
skipping: [testbed-node-3] 2025-01-16 14:47:31.373051 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:47:31.374711 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:47:31.374777 | orchestrator | 2025-01-16 14:47:31.374806 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-01-16 14:47:31.374846 | orchestrator | Thursday 16 January 2025 14:47:31 +0000 (0:00:00.057) 0:00:12.376 ****** 2025-01-16 14:47:35.413689 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:47:35.648848 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:47:35.648934 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:47:35.648941 | orchestrator | 2025-01-16 14:47:35.648948 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-01-16 14:47:35.648954 | orchestrator | Thursday 16 January 2025 14:47:35 +0000 (0:00:04.037) 0:00:16.414 ****** 2025-01-16 14:47:35.648970 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:47:37.472275 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:47:37.472709 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:47:37.472742 | orchestrator | 2025-01-16 14:47:37.472758 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-01-16 14:47:37.472772 | orchestrator | Thursday 16 January 2025 14:47:35 +0000 (0:00:00.237) 0:00:16.651 ****** 2025-01-16 14:47:37.472803 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-01-16 14:47:37.473366 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-01-16 14:47:37.473401 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-01-16 14:47:37.473441 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-01-16 14:47:37.473456 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-01-16 14:47:37.473470 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-01-16 14:47:37.473484 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-01-16 14:47:37.473497 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-01-16 14:47:37.473511 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-01-16 14:47:37.473524 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-01-16 14:47:37.473549 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-01-16 14:47:37.473641 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-01-16 14:47:37.473655 | orchestrator | 2025-01-16 14:47:37.473669 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-01-16 14:47:37.473690 | orchestrator | Thursday 16 January 2025 14:47:37 +0000 (0:00:01.821) 0:00:18.473 ****** 2025-01-16 14:47:38.105098 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:47:38.105274 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:47:38.105294 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:47:38.105305 | orchestrator | 2025-01-16 14:47:38.105316 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-01-16 14:47:38.105332 | orchestrator | 2025-01-16 14:47:38.107719 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-01-16 14:47:38.107773 | orchestrator | 
Thursday 16 January 2025 14:47:38 +0000 (0:00:00.634) 0:00:19.108 ****** 2025-01-16 14:47:40.084553 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:47:40.084800 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:47:40.084837 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:47:40.084863 | orchestrator | ok: [testbed-manager] 2025-01-16 14:47:40.084888 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:47:40.084924 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:47:40.085069 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:47:40.085449 | orchestrator | 2025-01-16 14:47:40.085801 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 14:47:40.086264 | orchestrator | 2025-01-16 14:47:40 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-01-16 14:47:40.086790 | orchestrator | 2025-01-16 14:47:40 | INFO  | Please wait and do not abort execution. 2025-01-16 14:47:40.086918 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 14:47:40.087147 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 14:47:40.087206 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 14:47:40.087356 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 14:47:40.087521 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 14:47:40.087804 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 14:47:40.087915 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 14:47:40.087940 | orchestrator | 2025-01-16 14:47:40.088071 | orchestrator | 2025-01-16 14:47:40.088218 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 14:47:40.088434 | orchestrator | Thursday 16 January 2025 14:47:40 +0000 (0:00:01.979) 0:00:21.087 ****** 2025-01-16 14:47:40.088904 | orchestrator | =============================================================================== 2025-01-16 14:47:40.089020 | orchestrator | osism.commons.repository : Update package cache ------------------------- 8.73s 2025-01-16 14:47:40.089035 | orchestrator | Install required packages (Debian) -------------------------------------- 4.04s 2025-01-16 14:47:40.089052 | orchestrator | Gathers facts about hosts ----------------------------------------------- 1.98s 2025-01-16 14:47:40.089198 | orchestrator | Copy fact files --------------------------------------------------------- 1.82s 2025-01-16 14:47:40.089950 | orchestrator | Create custom facts directory ------------------------------------------- 0.85s 2025-01-16 14:47:40.090400 | orchestrator | Copy fact file ---------------------------------------------------------- 0.70s 2025-01-16 14:47:40.090450 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 0.63s 2025-01-16 14:47:40.090615 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 0.55s 2025-01-16 14:47:40.090748 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 0.54s 2025-01-16 14:47:40.090887 | orchestrator | osism.commons.repository : Remove sources.list 
file --------------------- 0.28s 2025-01-16 14:47:40.090924 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.26s 2025-01-16 14:47:40.091051 | orchestrator | Create custom facts directory ------------------------------------------- 0.24s 2025-01-16 14:47:40.091079 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.08s 2025-01-16 14:47:40.091215 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.08s 2025-01-16 14:47:40.091257 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.08s 2025-01-16 14:47:40.091367 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.07s 2025-01-16 14:47:40.091386 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.06s 2025-01-16 14:47:40.091408 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.06s 2025-01-16 14:47:40.304345 | orchestrator | + osism apply bootstrap 2025-01-16 14:47:41.257768 | orchestrator | 2025-01-16 14:47:41 | INFO  | Task 67acfcbb-8d76-409c-ba0c-439e14833629 (bootstrap) was prepared for execution. 2025-01-16 14:47:43.435675 | orchestrator | 2025-01-16 14:47:41 | INFO  | It takes a moment until task 67acfcbb-8d76-409c-ba0c-439e14833629 (bootstrap) has been started and output is visible here. 2025-01-16 14:47:43.435844 | orchestrator | 2025-01-16 14:47:43.435922 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-01-16 14:47:43.435937 | orchestrator | 2025-01-16 14:47:43.435946 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-01-16 14:47:43.435958 | orchestrator | Thursday 16 January 2025 14:47:43 +0000 (0:00:00.074) 0:00:00.074 ****** 2025-01-16 14:47:43.487737 | orchestrator | ok: [testbed-manager] 2025-01-16 14:47:43.508446 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:47:43.520798 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:47:43.538505 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:47:43.589849 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:47:43.590144 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:47:43.590170 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:47:43.590281 | orchestrator | 2025-01-16 14:47:43.590956 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-01-16 14:47:43.591065 | orchestrator | 2025-01-16 14:47:43.591117 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-01-16 14:47:43.591273 | orchestrator | Thursday 16 January 2025 14:47:43 +0000 (0:00:00.156) 0:00:00.231 ****** 2025-01-16 14:47:45.486428 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:47:45.487728 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:47:45.487783 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:47:45.487799 | orchestrator | ok: [testbed-manager] 2025-01-16 14:47:45.487842 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:47:45.487857 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:47:45.487879 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:47:45.487903 | orchestrator | 2025-01-16 14:47:45.487930 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 2025-01-16 14:47:45.487956 | orchestrator | 2025-01-16 14:47:45.487990 | orchestrator | TASK [Gathers facts 
about hosts] *********************************************** 2025-01-16 14:47:45.547693 | orchestrator | Thursday 16 January 2025 14:47:45 +0000 (0:00:01.894) 0:00:02.125 ****** 2025-01-16 14:47:45.547830 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-01-16 14:47:45.570118 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-01-16 14:47:45.570209 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-01-16 14:47:45.570218 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-01-16 14:47:45.570237 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-01-16 14:47:45.600019 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-01-16 14:47:45.600197 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-01-16 14:47:45.600221 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-01-16 14:47:45.600258 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-01-16 14:47:45.807160 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-01-16 14:47:45.807286 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-01-16 14:47:45.807305 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-01-16 14:47:45.807320 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-01-16 14:47:45.807350 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-01-16 14:47:45.808811 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-01-16 14:47:45.808843 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-01-16 14:47:45.808860 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-01-16 14:47:45.808942 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-01-16 14:47:45.808967 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-01-16 14:47:45.809191 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-01-16 14:47:45.809216 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-01-16 14:47:45.809290 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:47:45.809317 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-01-16 14:47:45.809672 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-01-16 14:47:45.809705 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-01-16 14:47:45.809963 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-01-16 14:47:45.809993 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:47:45.812891 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-01-16 14:47:45.813006 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-01-16 14:47:45.813034 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:47:45.814981 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-01-16 14:47:45.815362 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-01-16 14:47:45.815498 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-01-16 14:47:45.815524 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-01-16 14:47:45.815547 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-01-16 14:47:45.815748 | orchestrator | 
skipping: [testbed-node-2] => (item=testbed-node-3)  2025-01-16 14:47:45.815782 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-01-16 14:47:45.815900 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-01-16 14:47:45.815926 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-01-16 14:47:45.816158 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-01-16 14:47:45.816328 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-01-16 14:47:45.816354 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-01-16 14:47:45.816380 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-01-16 14:47:45.816486 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-01-16 14:47:45.816912 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:47:45.817014 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:47:45.817034 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-01-16 14:47:45.817152 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-01-16 14:47:45.817312 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-01-16 14:47:45.817430 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-01-16 14:47:45.817548 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-01-16 14:47:45.817821 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-01-16 14:47:45.817947 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-01-16 14:47:45.817965 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:47:45.818084 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-01-16 14:47:45.818201 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:47:45.818436 | orchestrator | 2025-01-16 14:47:45.818508 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-01-16 14:47:45.818683 | orchestrator | 2025-01-16 14:47:45.818796 | orchestrator | TASK [osism.commons.hostname : Set hostname_name fact] ************************* 2025-01-16 14:47:45.818927 | orchestrator | Thursday 16 January 2025 14:47:45 +0000 (0:00:00.322) 0:00:02.448 ****** 2025-01-16 14:47:45.856323 | orchestrator | ok: [testbed-manager] 2025-01-16 14:47:45.872546 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:47:45.885736 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:47:45.901708 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:47:45.933322 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:47:45.933512 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:47:45.933623 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:47:45.933764 | orchestrator | 2025-01-16 14:47:45.934096 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-01-16 14:47:45.934371 | orchestrator | Thursday 16 January 2025 14:47:45 +0000 (0:00:00.125) 0:00:02.574 ****** 2025-01-16 14:47:46.698699 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:47:46.700842 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:47:46.700871 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:47:46.700886 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:47:46.701456 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:47:47.406902 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:47:47.407130 | orchestrator | ok: [testbed-manager] 2025-01-16 
14:47:47.407151 | orchestrator | 2025-01-16 14:47:47.407169 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-01-16 14:47:47.407185 | orchestrator | Thursday 16 January 2025 14:47:46 +0000 (0:00:00.764) 0:00:03.339 ****** 2025-01-16 14:47:47.407217 | orchestrator | ok: [testbed-manager] 2025-01-16 14:47:47.408765 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:47:47.408799 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:47:47.408814 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:47:47.408893 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:47:47.408923 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:47:47.409224 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:47:47.409252 | orchestrator | 2025-01-16 14:47:47.409274 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-01-16 14:47:47.566225 | orchestrator | Thursday 16 January 2025 14:47:47 +0000 (0:00:00.708) 0:00:04.047 ****** 2025-01-16 14:47:47.566346 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 14:47:48.808490 | orchestrator | 2025-01-16 14:47:48.808752 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-01-16 14:47:48.808786 | orchestrator | Thursday 16 January 2025 14:47:47 +0000 (0:00:00.158) 0:00:04.206 ****** 2025-01-16 14:47:48.808818 | orchestrator | changed: [testbed-manager] 2025-01-16 14:47:48.808981 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:47:48.809005 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:47:48.809032 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:47:48.809044 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:47:48.809073 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:47:48.809441 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:47:48.809694 | orchestrator | 2025-01-16 14:47:48.809715 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-01-16 14:47:48.809887 | orchestrator | Thursday 16 January 2025 14:47:48 +0000 (0:00:01.242) 0:00:05.448 ****** 2025-01-16 14:47:48.855234 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:47:48.966139 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 14:47:49.533248 | orchestrator | 2025-01-16 14:47:49.533552 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-01-16 14:47:49.533660 | orchestrator | Thursday 16 January 2025 14:47:48 +0000 (0:00:00.158) 0:00:05.606 ****** 2025-01-16 14:47:49.533691 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:47:49.533821 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:47:49.533842 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:47:49.533855 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:47:49.533867 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:47:49.533885 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:47:49.534227 | orchestrator | 2025-01-16 14:47:49.534381 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 
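The two osism.commons.proxy tasks around this point amount to dropping an apt proxy configuration file on the nodes and exporting the same proxy via /etc/environment (the manager is skipped in both cases). A minimal stand-alone sketch of that pattern — not the actual role code — with the proxy URL, the inventory group name and the drop-in file name chosen purely as placeholders:

- name: Configure HTTP proxy on the testbed nodes (illustrative sketch)
  hosts: testbed-nodes                          # hypothetical inventory group
  vars:
    proxy_url: "http://testbed-manager:3128"    # placeholder, not taken from this job
  tasks:
    - name: Configure proxy parameters for apt
      ansible.builtin.copy:
        dest: /etc/apt/apt.conf.d/90proxy       # file name is an assumption
        content: |
          Acquire::http::Proxy "{{ proxy_url }}";
          Acquire::https::Proxy "{{ proxy_url }}";
        mode: "0644"

    - name: Set system wide settings in environment file
      ansible.builtin.blockinfile:
        path: /etc/environment
        block: |
          http_proxy={{ proxy_url }}
          https_proxy={{ proxy_url }}
          no_proxy=localhost,127.0.0.1

The removal variant that runs against the manager in the following task would be the same blockinfile call with state: absent.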
2025-01-16 14:47:49.534649 | orchestrator | Thursday 16 January 2025 14:47:49 +0000 (0:00:00.566) 0:00:06.173 ****** 2025-01-16 14:47:49.572240 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:47:49.890714 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:47:49.890876 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:47:49.890892 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:47:49.890900 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:47:49.890912 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:47:49.891057 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:47:49.891698 | orchestrator | 2025-01-16 14:47:49.891896 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-01-16 14:47:49.891979 | orchestrator | Thursday 16 January 2025 14:47:49 +0000 (0:00:00.358) 0:00:06.531 ****** 2025-01-16 14:47:49.951274 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:47:49.966238 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:47:49.983349 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:47:50.158535 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:47:50.158698 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:47:50.158710 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:47:50.158964 | orchestrator | ok: [testbed-manager] 2025-01-16 14:47:50.158978 | orchestrator | 2025-01-16 14:47:50.159149 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-01-16 14:47:50.159671 | orchestrator | Thursday 16 January 2025 14:47:50 +0000 (0:00:00.268) 0:00:06.799 ****** 2025-01-16 14:47:50.204325 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:47:50.220191 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:47:50.234740 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:47:50.251301 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:47:50.283435 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:47:50.283708 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:47:50.283750 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:47:50.283771 | orchestrator | 2025-01-16 14:47:50.283956 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-01-16 14:47:50.284170 | orchestrator | Thursday 16 January 2025 14:47:50 +0000 (0:00:00.125) 0:00:06.925 ****** 2025-01-16 14:47:50.462530 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 14:47:50.645722 | orchestrator | 2025-01-16 14:47:50.645819 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-01-16 14:47:50.645882 | orchestrator | Thursday 16 January 2025 14:47:50 +0000 (0:00:00.178) 0:00:07.103 ****** 2025-01-16 14:47:50.645906 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 14:47:51.368297 | orchestrator | 2025-01-16 14:47:51.368440 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-01-16 14:47:51.368457 | orchestrator | Thursday 16 January 
2025 14:47:50 +0000 (0:00:00.182) 0:00:07.286 ****** 2025-01-16 14:47:51.368485 | orchestrator | ok: [testbed-manager] 2025-01-16 14:47:51.368662 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:47:51.368678 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:47:51.368686 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:47:51.368694 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:47:51.368702 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:47:51.368710 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:47:51.368722 | orchestrator | 2025-01-16 14:47:51.368784 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-01-16 14:47:51.369005 | orchestrator | Thursday 16 January 2025 14:47:51 +0000 (0:00:00.722) 0:00:08.008 ****** 2025-01-16 14:47:51.412213 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:47:51.428314 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:47:51.440952 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:47:51.454986 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:47:51.489961 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:47:51.490262 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:47:51.490300 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:47:51.490406 | orchestrator | 2025-01-16 14:47:51.491035 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-01-16 14:47:51.804447 | orchestrator | Thursday 16 January 2025 14:47:51 +0000 (0:00:00.122) 0:00:08.131 ****** 2025-01-16 14:47:51.804729 | orchestrator | ok: [testbed-manager] 2025-01-16 14:47:51.804838 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:47:51.804861 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:47:51.804875 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:47:51.804889 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:47:51.804898 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:47:51.804911 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:47:51.805067 | orchestrator | 2025-01-16 14:47:51.805231 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-01-16 14:47:51.805428 | orchestrator | Thursday 16 January 2025 14:47:51 +0000 (0:00:00.313) 0:00:08.444 ****** 2025-01-16 14:47:51.849114 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:47:51.864727 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:47:51.879690 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:47:51.894880 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:47:51.941144 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:47:51.942904 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:47:51.942997 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:47:51.943013 | orchestrator | 2025-01-16 14:47:51.943041 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-01-16 14:47:52.263445 | orchestrator | Thursday 16 January 2025 14:47:51 +0000 (0:00:00.136) 0:00:08.581 ****** 2025-01-16 14:47:52.263590 | orchestrator | ok: [testbed-manager] 2025-01-16 14:47:52.907228 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:47:52.907376 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:47:52.907393 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:47:52.907402 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:47:52.907412 | orchestrator | changed: [testbed-node-4] 2025-01-16 
14:47:52.907420 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:47:52.907429 | orchestrator | 2025-01-16 14:47:52.907440 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-01-16 14:47:52.907450 | orchestrator | Thursday 16 January 2025 14:47:52 +0000 (0:00:00.321) 0:00:08.903 ****** 2025-01-16 14:47:52.907474 | orchestrator | ok: [testbed-manager] 2025-01-16 14:47:52.907537 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:47:52.907549 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:47:52.907558 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:47:52.907615 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:47:52.907629 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:47:52.907867 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:47:52.908107 | orchestrator | 2025-01-16 14:47:52.908192 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-01-16 14:47:52.908429 | orchestrator | Thursday 16 January 2025 14:47:52 +0000 (0:00:00.641) 0:00:09.545 ****** 2025-01-16 14:47:53.591203 | orchestrator | ok: [testbed-manager] 2025-01-16 14:47:53.593799 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:47:53.593872 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:47:53.593883 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:47:53.593900 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:47:53.825207 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:47:53.825315 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:47:53.825330 | orchestrator | 2025-01-16 14:47:53.825344 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-01-16 14:47:53.825357 | orchestrator | Thursday 16 January 2025 14:47:53 +0000 (0:00:00.686) 0:00:10.231 ****** 2025-01-16 14:47:53.825403 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 14:47:53.826445 | orchestrator | 2025-01-16 14:47:53.826469 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-01-16 14:47:53.826487 | orchestrator | Thursday 16 January 2025 14:47:53 +0000 (0:00:00.231) 0:00:10.463 ****** 2025-01-16 14:47:53.874390 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:47:54.615064 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:47:54.615314 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:47:54.615441 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:47:54.615478 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:47:54.615830 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:47:54.615990 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:47:54.616313 | orchestrator | 2025-01-16 14:47:54.616451 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-01-16 14:47:54.661388 | orchestrator | Thursday 16 January 2025 14:47:54 +0000 (0:00:00.791) 0:00:11.255 ****** 2025-01-16 14:47:54.661492 | orchestrator | ok: [testbed-manager] 2025-01-16 14:47:54.678184 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:47:54.692665 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:47:54.707095 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:47:54.742964 | orchestrator | ok: [testbed-node-3] 2025-01-16 
14:47:54.743225 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:47:54.743250 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:47:54.743271 | orchestrator | 2025-01-16 14:47:54.744666 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-01-16 14:47:54.744789 | orchestrator | Thursday 16 January 2025 14:47:54 +0000 (0:00:00.128) 0:00:11.384 ****** 2025-01-16 14:47:54.791521 | orchestrator | ok: [testbed-manager] 2025-01-16 14:47:54.807346 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:47:54.823102 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:47:54.838276 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:47:54.881650 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:47:54.881908 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:47:54.881931 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:47:54.881954 | orchestrator | 2025-01-16 14:47:54.882110 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-01-16 14:47:54.882143 | orchestrator | Thursday 16 January 2025 14:47:54 +0000 (0:00:00.138) 0:00:11.522 ****** 2025-01-16 14:47:54.929651 | orchestrator | ok: [testbed-manager] 2025-01-16 14:47:54.947179 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:47:54.959673 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:47:54.976075 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:47:55.011343 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:47:55.011557 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:47:55.011764 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:47:55.011784 | orchestrator | 2025-01-16 14:47:55.011794 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-01-16 14:47:55.011828 | orchestrator | Thursday 16 January 2025 14:47:55 +0000 (0:00:00.130) 0:00:11.653 ****** 2025-01-16 14:47:55.202428 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 14:47:55.202698 | orchestrator | 2025-01-16 14:47:55.202751 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-01-16 14:47:55.202786 | orchestrator | Thursday 16 January 2025 14:47:55 +0000 (0:00:00.190) 0:00:11.843 ****** 2025-01-16 14:47:55.508941 | orchestrator | ok: [testbed-manager] 2025-01-16 14:47:55.509059 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:47:55.509068 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:47:55.509076 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:47:55.509121 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:47:55.509418 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:47:55.509694 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:47:55.509838 | orchestrator | 2025-01-16 14:47:55.509981 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-01-16 14:47:55.510163 | orchestrator | Thursday 16 January 2025 14:47:55 +0000 (0:00:00.306) 0:00:12.150 ****** 2025-01-16 14:47:55.553667 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:47:55.570304 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:47:55.584833 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:47:55.601726 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:47:55.641314 | orchestrator | skipping: [testbed-node-3] 
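The pre-24.04 branch is skipped on every host here; on these systems the repository role works with the deb822-style ubuntu.sources file that replaces the classic /etc/apt/sources.list, which is what the next few tasks (copy the 99osism apt configuration, remove sources.list, copy ubuntu.sources, update the package cache) take care of. A rough stand-alone sketch of that pattern — not the actual osism.commons.repository code — with the mirror URI, suites, components and keyring path as illustrative defaults rather than values taken from this job:

- name: Manage APT sources in deb822 format (illustrative sketch)
  hosts: all
  tasks:
    - name: Remove the legacy sources.list file
      ansible.builtin.file:
        path: /etc/apt/sources.list
        state: absent

    - name: Write ubuntu.sources in deb822 format
      ansible.builtin.copy:
        dest: /etc/apt/sources.list.d/ubuntu.sources
        content: |
          Types: deb
          URIs: http://archive.ubuntu.com/ubuntu/
          Suites: noble noble-updates noble-security
          Components: main restricted universe multiverse
          Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
        mode: "0644"

    - name: Update package cache
      ansible.builtin.apt:
        update_cache: true

A single deb822 stanza carries the URIs, suites, components and signing key together, which is why one file can replace both sources.list and the old one-line *.list entries.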
2025-01-16 14:47:55.641524 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:47:55.641548 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:47:55.641621 | orchestrator | 2025-01-16 14:47:55.641814 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-01-16 14:47:56.204954 | orchestrator | Thursday 16 January 2025 14:47:55 +0000 (0:00:00.132) 0:00:12.282 ****** 2025-01-16 14:47:56.205087 | orchestrator | ok: [testbed-manager] 2025-01-16 14:47:56.205943 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:47:56.205972 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:47:56.205979 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:47:56.205992 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:47:56.206506 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:47:56.206683 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:47:56.206808 | orchestrator | 2025-01-16 14:47:56.206827 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-01-16 14:47:56.206842 | orchestrator | Thursday 16 January 2025 14:47:56 +0000 (0:00:00.563) 0:00:12.846 ****** 2025-01-16 14:47:56.530594 | orchestrator | ok: [testbed-manager] 2025-01-16 14:47:56.531118 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:47:56.531178 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:47:56.531204 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:47:56.531251 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:47:56.531278 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:47:56.531317 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:47:57.111173 | orchestrator | 2025-01-16 14:47:57.111285 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-01-16 14:47:57.111299 | orchestrator | Thursday 16 January 2025 14:47:56 +0000 (0:00:00.325) 0:00:13.171 ****** 2025-01-16 14:47:57.111322 | orchestrator | ok: [testbed-manager] 2025-01-16 14:48:06.213002 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:48:06.213178 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:48:06.213203 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:48:06.213217 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:48:06.213232 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:48:06.213246 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:48:06.213260 | orchestrator | 2025-01-16 14:48:06.213276 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-01-16 14:48:06.213292 | orchestrator | Thursday 16 January 2025 14:47:57 +0000 (0:00:00.578) 0:00:13.750 ****** 2025-01-16 14:48:06.213326 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:48:06.263231 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:48:06.263347 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:48:06.263364 | orchestrator | changed: [testbed-manager] 2025-01-16 14:48:06.263378 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:48:06.263390 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:48:06.263403 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:48:06.263416 | orchestrator | 2025-01-16 14:48:06.263430 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-01-16 14:48:06.263444 | orchestrator | Thursday 16 January 2025 14:48:06 +0000 (0:00:09.102) 0:00:22.852 ****** 2025-01-16 14:48:06.263472 | orchestrator | ok: [testbed-manager] 2025-01-16 14:48:06.275535 | orchestrator 
| ok: [testbed-node-0] 2025-01-16 14:48:06.292383 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:48:06.306241 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:48:06.341876 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:48:06.342328 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:48:06.342365 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:48:06.342403 | orchestrator | 2025-01-16 14:48:06.386761 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-01-16 14:48:06.386889 | orchestrator | Thursday 16 January 2025 14:48:06 +0000 (0:00:00.130) 0:00:22.983 ****** 2025-01-16 14:48:06.386929 | orchestrator | ok: [testbed-manager] 2025-01-16 14:48:06.402304 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:48:06.416127 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:48:06.431186 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:48:06.468548 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:48:06.468730 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:48:06.468747 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:48:06.468897 | orchestrator | 2025-01-16 14:48:06.469147 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-01-16 14:48:06.469439 | orchestrator | Thursday 16 January 2025 14:48:06 +0000 (0:00:00.127) 0:00:23.110 ****** 2025-01-16 14:48:06.514449 | orchestrator | ok: [testbed-manager] 2025-01-16 14:48:06.530320 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:48:06.545107 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:48:06.561331 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:48:06.595330 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:48:06.595447 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:48:06.595468 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:48:06.595683 | orchestrator | 2025-01-16 14:48:06.595910 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-01-16 14:48:06.596116 | orchestrator | Thursday 16 January 2025 14:48:06 +0000 (0:00:00.126) 0:00:23.236 ****** 2025-01-16 14:48:06.762284 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 14:48:06.762557 | orchestrator | 2025-01-16 14:48:06.762622 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-01-16 14:48:06.762658 | orchestrator | Thursday 16 January 2025 14:48:06 +0000 (0:00:00.166) 0:00:23.403 ****** 2025-01-16 14:48:07.635391 | orchestrator | ok: [testbed-manager] 2025-01-16 14:48:07.635482 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:48:07.635493 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:48:07.635501 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:48:07.635557 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:48:07.635613 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:48:07.635739 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:48:07.635758 | orchestrator | 2025-01-16 14:48:07.635945 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-01-16 14:48:07.636095 | orchestrator | Thursday 16 January 2025 14:48:07 +0000 (0:00:00.872) 0:00:24.275 ****** 2025-01-16 14:48:08.229153 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:48:08.229834 | orchestrator | changed: 
[testbed-manager] 2025-01-16 14:48:08.230120 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:48:08.230409 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:48:08.230762 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:48:08.230963 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:48:08.231331 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:48:08.231721 | orchestrator | 2025-01-16 14:48:08.232057 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-01-16 14:48:08.232262 | orchestrator | Thursday 16 January 2025 14:48:08 +0000 (0:00:00.593) 0:00:24.869 ****** 2025-01-16 14:48:08.701807 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:48:08.701957 | orchestrator | ok: [testbed-manager] 2025-01-16 14:48:08.701974 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:48:08.702302 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:48:08.702822 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:48:08.703042 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:48:08.703058 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:48:08.703069 | orchestrator | 2025-01-16 14:48:08.703233 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-01-16 14:48:08.703361 | orchestrator | Thursday 16 January 2025 14:48:08 +0000 (0:00:00.473) 0:00:25.342 ****** 2025-01-16 14:48:08.875319 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 14:48:08.875533 | orchestrator | 2025-01-16 14:48:08.875554 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-01-16 14:48:08.875631 | orchestrator | Thursday 16 January 2025 14:48:08 +0000 (0:00:00.174) 0:00:25.516 ****** 2025-01-16 14:48:09.474066 | orchestrator | changed: [testbed-manager] 2025-01-16 14:48:09.476085 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:48:09.476380 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:48:09.476745 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:48:09.477029 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:48:09.477253 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:48:09.477528 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:48:09.477736 | orchestrator | 2025-01-16 14:48:09.477967 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-01-16 14:48:09.480610 | orchestrator | Thursday 16 January 2025 14:48:09 +0000 (0:00:00.594) 0:00:26.110 ****** 2025-01-16 14:48:09.533669 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:48:09.545655 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:48:09.562504 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:48:09.647417 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:48:09.647863 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:48:09.647928 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:48:09.647945 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:48:09.647959 | orchestrator | 2025-01-16 14:48:09.647983 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-01-16 14:48:15.641623 | orchestrator | Thursday 16 January 2025 14:48:09 +0000 (0:00:00.177) 0:00:26.288 ****** 2025-01-16 14:48:15.641796 | orchestrator 
| changed: [testbed-node-3] 2025-01-16 14:48:15.641876 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:48:15.641889 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:48:15.641897 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:48:15.641904 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:48:15.641912 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:48:15.641924 | orchestrator | changed: [testbed-manager] 2025-01-16 14:48:15.642070 | orchestrator | 2025-01-16 14:48:15.642184 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-01-16 14:48:15.642322 | orchestrator | Thursday 16 January 2025 14:48:15 +0000 (0:00:05.993) 0:00:32.281 ****** 2025-01-16 14:48:16.604925 | orchestrator | ok: [testbed-manager] 2025-01-16 14:48:16.605060 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:48:16.605074 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:48:16.605083 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:48:16.605091 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:48:16.605108 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:48:16.605242 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:48:16.605327 | orchestrator | 2025-01-16 14:48:16.605603 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-01-16 14:48:16.605764 | orchestrator | Thursday 16 January 2025 14:48:16 +0000 (0:00:00.964) 0:00:33.246 ****** 2025-01-16 14:48:17.172323 | orchestrator | ok: [testbed-manager] 2025-01-16 14:48:17.173357 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:48:17.218794 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:48:17.218889 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:48:17.218899 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:48:17.218908 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:48:17.218921 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:48:17.218935 | orchestrator | 2025-01-16 14:48:17.218949 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-01-16 14:48:17.218963 | orchestrator | Thursday 16 January 2025 14:48:17 +0000 (0:00:00.566) 0:00:33.812 ****** 2025-01-16 14:48:17.218991 | orchestrator | ok: [testbed-manager] 2025-01-16 14:48:17.235443 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:48:17.251611 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:48:17.267815 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:48:17.304979 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:48:17.305410 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:48:17.305455 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:48:17.349570 | orchestrator | 2025-01-16 14:48:17.349760 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-01-16 14:48:17.349779 | orchestrator | Thursday 16 January 2025 14:48:17 +0000 (0:00:00.133) 0:00:33.945 ****** 2025-01-16 14:48:17.349823 | orchestrator | ok: [testbed-manager] 2025-01-16 14:48:17.365145 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:48:17.381178 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:48:17.397040 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:48:17.435024 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:48:17.435153 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:48:17.435165 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:48:17.435173 | orchestrator | 2025-01-16 14:48:17.435184 | orchestrator | TASK [osism.commons.packages : 
Include distribution specific package tasks] **** 2025-01-16 14:48:17.435374 | orchestrator | Thursday 16 January 2025 14:48:17 +0000 (0:00:00.130) 0:00:34.076 ****** 2025-01-16 14:48:17.622850 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 14:48:18.658954 | orchestrator | 2025-01-16 14:48:18.659167 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-01-16 14:48:18.659277 | orchestrator | Thursday 16 January 2025 14:48:17 +0000 (0:00:00.187) 0:00:34.264 ****** 2025-01-16 14:48:18.659311 | orchestrator | ok: [testbed-manager] 2025-01-16 14:48:18.659617 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:48:18.659644 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:48:18.659665 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:48:18.659850 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:48:18.660233 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:48:18.660736 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:48:18.661659 | orchestrator | 2025-01-16 14:48:18.662128 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-01-16 14:48:18.662377 | orchestrator | Thursday 16 January 2025 14:48:18 +0000 (0:00:01.034) 0:00:35.298 ****** 2025-01-16 14:48:19.069358 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:48:19.069517 | orchestrator | changed: [testbed-manager] 2025-01-16 14:48:19.069542 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:48:19.069565 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:48:19.070332 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:48:19.071691 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:48:19.071734 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:48:19.071744 | orchestrator | 2025-01-16 14:48:19.071761 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-01-16 14:48:19.071900 | orchestrator | Thursday 16 January 2025 14:48:19 +0000 (0:00:00.409) 0:00:35.707 ****** 2025-01-16 14:48:19.102306 | orchestrator | ok: [testbed-manager] 2025-01-16 14:48:19.135305 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:48:19.153251 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:48:19.173115 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:48:19.210810 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:48:19.210939 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:48:19.210953 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:48:19.211051 | orchestrator | 2025-01-16 14:48:19.211306 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-01-16 14:48:19.211478 | orchestrator | Thursday 16 January 2025 14:48:19 +0000 (0:00:00.144) 0:00:35.852 ****** 2025-01-16 14:48:19.904152 | orchestrator | ok: [testbed-manager] 2025-01-16 14:48:19.904293 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:48:19.904414 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:48:19.904442 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:48:19.904841 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:48:19.904896 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:48:19.905161 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:48:19.905209 | orchestrator | 2025-01-16 14:48:19.905318 | orchestrator | TASK 
[osism.commons.packages : Download upgrade packages] ********************** 2025-01-16 14:48:19.905607 | orchestrator | Thursday 16 January 2025 14:48:19 +0000 (0:00:00.691) 0:00:36.543 ****** 2025-01-16 14:48:20.814982 | orchestrator | changed: [testbed-manager] 2025-01-16 14:48:20.815638 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:48:20.815818 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:48:20.815848 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:48:20.815934 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:48:20.816158 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:48:20.816185 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:48:20.816352 | orchestrator | 2025-01-16 14:48:20.817509 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-01-16 14:48:22.072322 | orchestrator | Thursday 16 January 2025 14:48:20 +0000 (0:00:00.911) 0:00:37.455 ****** 2025-01-16 14:48:22.072456 | orchestrator | ok: [testbed-manager] 2025-01-16 14:48:22.072820 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:48:22.072844 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:48:22.072864 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:48:22.073146 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:48:22.073169 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:48:22.073220 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:48:22.073385 | orchestrator | 2025-01-16 14:48:22.073408 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-01-16 14:48:22.073648 | orchestrator | Thursday 16 January 2025 14:48:22 +0000 (0:00:01.257) 0:00:38.713 ****** 2025-01-16 14:48:56.037042 | orchestrator | ok: [testbed-manager] 2025-01-16 14:48:56.038089 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:48:56.038209 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:49:32.423699 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:49:32.424184 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:49:32.424224 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:49:32.424250 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:49:32.424278 | orchestrator | 2025-01-16 14:49:32.424306 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-01-16 14:49:32.424333 | orchestrator | Thursday 16 January 2025 14:48:56 +0000 (0:00:33.963) 0:01:12.676 ****** 2025-01-16 14:49:32.424380 | orchestrator | changed: [testbed-manager] 2025-01-16 14:49:33.420202 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:49:33.420469 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:49:33.420676 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:49:33.420707 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:49:33.420725 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:49:33.420851 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:49:33.420862 | orchestrator | 2025-01-16 14:49:33.420874 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-01-16 14:49:33.420885 | orchestrator | Thursday 16 January 2025 14:49:32 +0000 (0:00:36.382) 0:01:49.059 ****** 2025-01-16 14:49:33.420910 | orchestrator | ok: [testbed-manager] 2025-01-16 14:49:33.421032 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:49:33.421052 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:49:33.421069 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:49:33.421116 | orchestrator | ok: 
[testbed-node-4] 2025-01-16 14:49:33.421190 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:49:33.421201 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:49:33.421211 | orchestrator | 2025-01-16 14:49:33.421226 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-01-16 14:49:37.025706 | orchestrator | Thursday 16 January 2025 14:49:33 +0000 (0:00:01.001) 0:01:50.061 ****** 2025-01-16 14:49:37.025868 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:49:37.026143 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:49:37.026161 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:49:37.026166 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:49:37.026171 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:49:37.026176 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:49:37.026182 | orchestrator | changed: [testbed-manager] 2025-01-16 14:49:37.026188 | orchestrator | 2025-01-16 14:49:37.026193 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-01-16 14:49:37.026210 | orchestrator | Thursday 16 January 2025 14:49:37 +0000 (0:00:03.601) 0:01:53.662 ****** 2025-01-16 14:49:37.246655 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-01-16 14:49:37.247213 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-01-16 14:49:37.247518 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-01-16 14:49:37.247560 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-01-16 14:49:37.247576 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-01-16 14:49:37.247602 | orchestrator | 2025-01-16 14:49:37.265711 | orchestrator | TASK [osism.commons.sysctl : 
Set sysctl parameters on elasticsearch] *********** 2025-01-16 14:49:37.265811 | orchestrator | Thursday 16 January 2025 14:49:37 +0000 (0:00:00.224) 0:01:53.887 ****** 2025-01-16 14:49:37.265839 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-01-16 14:49:37.289080 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:49:37.350158 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-01-16 14:49:37.635052 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:49:37.635269 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-01-16 14:49:37.635293 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:49:37.635357 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-01-16 14:49:37.635464 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:49:37.635739 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-01-16 14:49:37.636104 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-01-16 14:49:37.636248 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-01-16 14:49:37.636638 | orchestrator | 2025-01-16 14:49:37.636674 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-01-16 14:49:37.637070 | orchestrator | Thursday 16 January 2025 14:49:37 +0000 (0:00:00.388) 0:01:54.275 ****** 2025-01-16 14:49:37.672062 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-01-16 14:49:37.672367 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-01-16 14:49:37.672407 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-01-16 14:49:37.672442 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-01-16 14:49:37.672925 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-01-16 14:49:37.673074 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-01-16 14:49:37.673205 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-01-16 14:49:37.676695 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-01-16 14:49:37.690864 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-01-16 14:49:37.690977 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-01-16 14:49:37.709141 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:49:37.749198 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-01-16 14:49:37.749393 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-01-16 14:49:37.749753 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-01-16 14:49:37.749791 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 
'value': 16777216})  2025-01-16 14:49:37.750157 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-01-16 14:49:37.750431 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-01-16 14:49:37.751141 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-01-16 14:49:37.751944 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-01-16 14:49:37.751975 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-01-16 14:49:37.752197 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-01-16 14:49:39.820179 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-01-16 14:49:39.820386 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-01-16 14:49:39.820434 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-01-16 14:49:39.820730 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-01-16 14:49:39.822589 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:49:39.823026 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-01-16 14:49:39.823051 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-01-16 14:49:39.823064 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-01-16 14:49:39.823076 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-01-16 14:49:39.823092 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-01-16 14:49:39.823477 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-01-16 14:49:39.824459 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-01-16 14:49:39.825445 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-01-16 14:49:39.825681 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-01-16 14:49:39.825720 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:49:39.826252 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-01-16 14:49:39.826737 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-01-16 14:49:39.827446 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-01-16 14:49:39.827723 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-01-16 14:49:39.828156 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-01-16 14:49:39.828520 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-01-16 14:49:39.829185 | orchestrator | skipping: [testbed-node-5] => (item={'name': 
'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-01-16 14:49:39.829484 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:49:39.829848 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-01-16 14:49:39.830135 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-01-16 14:49:39.830603 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-01-16 14:49:39.830935 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-01-16 14:49:39.831147 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-01-16 14:49:39.831432 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-01-16 14:49:39.831643 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-01-16 14:49:39.832221 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-01-16 14:49:39.832382 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-01-16 14:49:39.832428 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-01-16 14:49:39.832816 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-01-16 14:49:39.833070 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-01-16 14:49:39.833517 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-01-16 14:49:39.833986 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-01-16 14:49:39.834167 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-01-16 14:49:39.834434 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-01-16 14:49:39.834453 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-01-16 14:49:39.835376 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-01-16 14:49:39.835450 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-01-16 14:49:39.835460 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-01-16 14:49:39.835469 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-01-16 14:49:39.835796 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-01-16 14:49:39.836593 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-01-16 14:49:39.836906 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-01-16 14:49:39.836930 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-01-16 14:49:39.836940 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-01-16 14:49:39.839162 | orchestrator | changed: 
[testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-01-16 14:49:40.208948 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-01-16 14:49:40.209133 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-01-16 14:49:40.209169 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-01-16 14:49:40.209197 | orchestrator | 2025-01-16 14:49:40.209223 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-01-16 14:49:40.209248 | orchestrator | Thursday 16 January 2025 14:49:39 +0000 (0:00:02.185) 0:01:56.460 ****** 2025-01-16 14:49:40.209330 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-01-16 14:49:40.209822 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-01-16 14:49:40.209886 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-01-16 14:49:40.242345 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-01-16 14:49:40.242432 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-01-16 14:49:40.242444 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-01-16 14:49:40.242453 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-01-16 14:49:40.242462 | orchestrator | 2025-01-16 14:49:40.242471 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-01-16 14:49:40.242481 | orchestrator | Thursday 16 January 2025 14:49:40 +0000 (0:00:00.387) 0:01:56.848 ****** 2025-01-16 14:49:40.242505 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-01-16 14:49:40.259817 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-01-16 14:49:40.259943 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:49:40.275228 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-01-16 14:49:40.275333 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:49:40.296364 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:49:40.296744 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-01-16 14:49:40.308997 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:49:41.561978 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-01-16 14:49:41.579914 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-01-16 14:49:41.580030 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-01-16 14:49:41.580049 | orchestrator | 2025-01-16 14:49:41.580064 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-01-16 14:49:41.580079 | orchestrator | Thursday 16 January 2025 14:49:41 +0000 (0:00:01.351) 0:01:58.199 ****** 2025-01-16 14:49:41.580108 | orchestrator | skipping: [testbed-manager] => (item={'name': 
'fs.inotify.max_user_instances', 'value': 1024})  2025-01-16 14:49:41.599081 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-01-16 14:49:41.618342 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:49:41.637695 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-01-16 14:49:41.637837 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:49:41.653046 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-01-16 14:49:41.653207 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:49:41.671968 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:49:42.939505 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-01-16 14:49:42.939831 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-01-16 14:49:42.939889 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-01-16 14:49:42.941931 | orchestrator | 2025-01-16 14:49:42.942133 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-01-16 14:49:42.942170 | orchestrator | Thursday 16 January 2025 14:49:42 +0000 (0:00:01.380) 0:01:59.579 ****** 2025-01-16 14:49:42.969979 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:49:42.985093 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:49:42.999702 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:49:43.014394 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:49:43.028928 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:49:43.109828 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:49:43.110011 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:49:43.110082 | orchestrator | 2025-01-16 14:49:43.110102 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-01-16 14:49:43.110227 | orchestrator | Thursday 16 January 2025 14:49:43 +0000 (0:00:00.171) 0:01:59.750 ****** 2025-01-16 14:49:45.586309 | orchestrator | ok: [testbed-manager] 2025-01-16 14:49:45.586430 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:49:45.586440 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:49:45.586462 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:49:45.586471 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:49:45.586641 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:49:45.586923 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:49:45.586976 | orchestrator | 2025-01-16 14:49:45.587163 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-01-16 14:49:45.591027 | orchestrator | Thursday 16 January 2025 14:49:45 +0000 (0:00:02.476) 0:02:02.227 ****** 2025-01-16 14:49:45.609768 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-01-16 14:49:45.633442 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:49:45.656779 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-01-16 14:49:45.656918 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:49:45.679007 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-01-16 14:49:45.679117 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:49:45.702083 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-01-16 
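Annotation: the osism.commons.sysctl tasks above apply kernel tuning per host group: vm.max_map_count=262144 on the elasticsearch group, TCP keepalive and buffer settings on the rabbitmq group, vm.swappiness=1 on every host, nf_conntrack_max=1048576 on the compute nodes, and fs.inotify.max_user_instances=1024 on the k3s nodes; hosts outside the respective group are skipped. A minimal task-file sketch of the same pattern, assuming the standard ansible.posix.sysctl module (the actual role code is not part of this log; names and values are copied from the output above):

- name: Set sysctl parameters for a host group (illustrative sketch)
  ansible.posix.sysctl:
    name: "{{ item.name }}"
    value: "{{ item.value }}"
    state: present
    sysctl_set: true   # also apply with sysctl -w instead of only writing the file
    reload: true
  loop:
    - { name: vm.max_map_count, value: 262144 }                # elasticsearch group
    - { name: net.ipv4.tcp_keepalive_time, value: 6 }          # rabbitmq group
    - { name: vm.swappiness, value: 1 }                        # generic (all hosts)
    - { name: net.netfilter.nf_conntrack_max, value: 1048576 } # compute group
    - { name: fs.inotify.max_user_instances, value: 1024 }     # k3s_node group

With sysctl_set and reload enabled, the values take effect immediately and survive reboots via /etc/sysctl.conf or a sysctl.d drop-in.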
14:49:45.702211 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:49:45.724099 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-01-16 14:49:45.724231 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:49:45.769095 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-01-16 14:49:45.769288 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:49:45.769437 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-01-16 14:49:45.769453 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:49:45.769461 | orchestrator | 2025-01-16 14:49:45.769473 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-01-16 14:49:45.769559 | orchestrator | Thursday 16 January 2025 14:49:45 +0000 (0:00:00.182) 0:02:02.410 ****** 2025-01-16 14:49:46.453966 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-01-16 14:49:46.454076 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-01-16 14:49:46.454084 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-01-16 14:49:46.454090 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-01-16 14:49:46.454095 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-01-16 14:49:46.454100 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-01-16 14:49:46.454106 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-01-16 14:49:46.454114 | orchestrator | 2025-01-16 14:49:46.454161 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-01-16 14:49:46.454409 | orchestrator | Thursday 16 January 2025 14:49:46 +0000 (0:00:00.683) 0:02:03.093 ****** 2025-01-16 14:49:46.804239 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 14:49:47.549352 | orchestrator | 2025-01-16 14:49:47.549530 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-01-16 14:49:47.549550 | orchestrator | Thursday 16 January 2025 14:49:46 +0000 (0:00:00.351) 0:02:03.445 ****** 2025-01-16 14:49:47.549577 | orchestrator | ok: [testbed-manager] 2025-01-16 14:49:47.550898 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:49:47.550925 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:49:47.551030 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:49:47.551061 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:49:47.551375 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:49:47.551406 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:49:47.551701 | orchestrator | 2025-01-16 14:49:47.551802 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-01-16 14:49:47.551827 | orchestrator | Thursday 16 January 2025 14:49:47 +0000 (0:00:00.744) 0:02:04.190 ****** 2025-01-16 14:49:47.971242 | orchestrator | ok: [testbed-manager] 2025-01-16 14:49:47.972290 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:49:47.972327 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:49:47.972579 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:49:47.972647 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:49:47.972657 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:49:47.972666 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:49:47.972671 | orchestrator | 2025-01-16 14:49:47.972680 | orchestrator | TASK [osism.commons.motd : Disable 
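Annotation: the osism.commons.services tasks gather systemd service facts, optionally check for services such as nscd (skipped on all hosts here), and then make sure cron is running and enabled. A rough equivalent with the builtin modules, as a sketch rather than the role's actual code:

- name: Populate service facts
  ansible.builtin.service_facts:

- name: Start/enable required services
  ansible.builtin.service:
    name: "{{ item }}"
    state: started
    enabled: true
  loop:
    - cron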
the dynamic motd-news service] ************** 2025-01-16 14:49:48.385572 | orchestrator | Thursday 16 January 2025 14:49:47 +0000 (0:00:00.422) 0:02:04.612 ****** 2025-01-16 14:49:48.385737 | orchestrator | changed: [testbed-manager] 2025-01-16 14:49:48.386132 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:49:48.386159 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:49:48.387929 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:49:48.777520 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:49:48.777750 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:49:48.777789 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:49:48.777812 | orchestrator | 2025-01-16 14:49:48.777837 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-01-16 14:49:48.777859 | orchestrator | Thursday 16 January 2025 14:49:48 +0000 (0:00:00.413) 0:02:05.026 ****** 2025-01-16 14:49:48.777903 | orchestrator | ok: [testbed-manager] 2025-01-16 14:49:48.778483 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:49:48.778543 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:49:48.778866 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:49:48.778889 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:49:48.778907 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:49:48.779272 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:49:48.779791 | orchestrator | 2025-01-16 14:49:48.780374 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-01-16 14:49:48.780684 | orchestrator | Thursday 16 January 2025 14:49:48 +0000 (0:00:00.391) 0:02:05.417 ****** 2025-01-16 14:49:49.424520 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1737037823.9698102, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-01-16 14:49:49.424824 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1737038828.678281, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-01-16 14:49:49.424866 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1737038828.6213286, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-01-16 14:49:49.424994 | orchestrator | changed: [testbed-node-4] => (item={'path': 
'/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1737038828.6918843, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-01-16 14:49:49.425385 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1737038828.6791995, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-01-16 14:49:49.425430 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1737038828.6574657, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-01-16 14:49:49.425710 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1737038828.6738708, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-01-16 14:49:49.425885 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1737037835.1478102, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-01-16 14:49:49.426132 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1737037740.0791554, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-01-16 14:49:49.426570 | orchestrator | changed: [testbed-node-0] => (item={'path': 
'/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1737037738.688078, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-01-16 14:49:49.426845 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1737037746.1483681, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-01-16 14:49:49.427096 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1737037747.303335, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-01-16 14:49:49.427488 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1737037737.7708373, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-01-16 14:49:49.427798 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1737037739.1711037, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-01-16 14:49:49.427981 | orchestrator | 2025-01-16 14:49:49.428318 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-01-16 14:49:49.428651 | orchestrator | Thursday 16 January 2025 14:49:49 +0000 (0:00:00.646) 0:02:06.064 ****** 2025-01-16 14:49:50.115590 | orchestrator | changed: [testbed-manager] 2025-01-16 14:49:50.115855 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:49:50.116125 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:49:50.116151 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:49:50.116168 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:49:50.116782 | orchestrator 
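Annotation: the motd role enumerates the files in /etc/pam.d (the long item dictionaries above match the output format of ansible.builtin.find) and strips the pam_motd.so rule from each of them, so PAM no longer prints the dynamic motd on login. A sketch of that pattern, assuming find plus lineinfile; the exact regular expression used by the role is not visible in the log:

- name: Get all configuration files in /etc/pam.d
  ansible.builtin.find:
    paths: /etc/pam.d
  register: pam_files

- name: Remove pam_motd.so rule
  ansible.builtin.lineinfile:
    path: "{{ item.path }}"
    regexp: 'pam_motd\.so'   # assumed pattern; matches any line loading pam_motd.so
    state: absent
  loop: "{{ pam_files.files }}"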
| changed: [testbed-node-3] 2025-01-16 14:49:50.116805 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:49:50.116843 | orchestrator | 2025-01-16 14:49:50.117053 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-01-16 14:49:50.117359 | orchestrator | Thursday 16 January 2025 14:49:50 +0000 (0:00:00.691) 0:02:06.756 ****** 2025-01-16 14:49:50.800797 | orchestrator | changed: [testbed-manager] 2025-01-16 14:49:50.800958 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:49:50.800981 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:49:50.800999 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:49:50.801179 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:49:50.801269 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:49:50.801783 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:49:50.806931 | orchestrator | 2025-01-16 14:49:51.536361 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-01-16 14:49:51.536516 | orchestrator | Thursday 16 January 2025 14:49:50 +0000 (0:00:00.685) 0:02:07.441 ****** 2025-01-16 14:49:51.536556 | orchestrator | changed: [testbed-manager] 2025-01-16 14:49:51.537556 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:49:51.575138 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:49:51.575278 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:49:51.575303 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:49:51.575324 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:49:51.575345 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:49:51.575366 | orchestrator | 2025-01-16 14:49:51.575388 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-01-16 14:49:51.575409 | orchestrator | Thursday 16 January 2025 14:49:51 +0000 (0:00:00.734) 0:02:08.175 ****** 2025-01-16 14:49:51.575448 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:49:51.604089 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:49:51.624776 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:49:51.645379 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:49:51.665592 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:49:51.703372 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:49:51.703518 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:49:51.703550 | orchestrator | 2025-01-16 14:49:51.703574 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-01-16 14:49:51.703611 | orchestrator | Thursday 16 January 2025 14:49:51 +0000 (0:00:00.168) 0:02:08.344 ****** 2025-01-16 14:49:52.178416 | orchestrator | ok: [testbed-manager] 2025-01-16 14:49:52.180683 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:49:52.180776 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:49:52.433172 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:49:52.433293 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:49:52.433312 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:49:52.433328 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:49:52.433343 | orchestrator | 2025-01-16 14:49:52.433360 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-01-16 14:49:52.433378 | orchestrator | Thursday 16 January 2025 14:49:52 +0000 (0:00:00.474) 0:02:08.818 ****** 2025-01-16 14:49:52.433410 | orchestrator | included: 
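Annotation: after disabling the Ubuntu motd-news mechanism, the role installs static /etc/motd, /etc/issue and /etc/issue.net files and makes sure sshd itself does not print the motd (the "Configure SSH to print the motd" variant is skipped, the "not print" variant runs). A sketch with builtin modules; the source file names and the sshd option handling are assumptions, not taken from the role:

- name: Copy static banner files
  ansible.builtin.copy:
    src: "{{ item }}"          # assumed source file names
    dest: "/etc/{{ item }}"
    owner: root
    group: root
    mode: "0644"
  loop:
    - motd
    - issue
    - issue.net

- name: Configure SSH to not print the motd
  ansible.builtin.lineinfile:
    path: /etc/ssh/sshd_config
    regexp: '^#?PrintMotd'
    line: PrintMotd no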
/usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 14:49:57.168905 | orchestrator | 2025-01-16 14:49:57.169016 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-01-16 14:49:57.169031 | orchestrator | Thursday 16 January 2025 14:49:52 +0000 (0:00:00.255) 0:02:09.073 ****** 2025-01-16 14:49:57.169054 | orchestrator | ok: [testbed-manager] 2025-01-16 14:49:57.169339 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:49:57.169454 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:49:57.169464 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:49:57.169470 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:49:57.169484 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:49:57.170505 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:49:57.170561 | orchestrator | 2025-01-16 14:49:57.853838 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-01-16 14:49:57.854104 | orchestrator | Thursday 16 January 2025 14:49:57 +0000 (0:00:04.733) 0:02:13.807 ****** 2025-01-16 14:49:57.854146 | orchestrator | ok: [testbed-manager] 2025-01-16 14:49:57.854570 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:49:57.854607 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:49:57.854616 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:49:57.854678 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:49:57.854916 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:49:57.855015 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:49:57.856942 | orchestrator | 2025-01-16 14:49:58.548360 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-01-16 14:49:58.548732 | orchestrator | Thursday 16 January 2025 14:49:57 +0000 (0:00:00.687) 0:02:14.495 ****** 2025-01-16 14:49:58.548788 | orchestrator | ok: [testbed-manager] 2025-01-16 14:49:58.549096 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:49:58.549123 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:49:58.549138 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:49:58.549213 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:49:58.549418 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:49:58.549994 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:49:58.550185 | orchestrator | 2025-01-16 14:49:58.550219 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-01-16 14:49:58.550428 | orchestrator | Thursday 16 January 2025 14:49:58 +0000 (0:00:00.689) 0:02:15.185 ****** 2025-01-16 14:49:58.797688 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 14:49:58.797832 | orchestrator | 2025-01-16 14:49:58.797843 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-01-16 14:49:58.797854 | orchestrator | Thursday 16 January 2025 14:49:58 +0000 (0:00:00.253) 0:02:15.438 ****** 2025-01-16 14:50:03.555427 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:50:03.939713 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:50:03.939909 | orchestrator | changed: [testbed-node-4] 2025-01-16 
14:50:03.939931 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:50:03.939943 | orchestrator | changed: [testbed-manager] 2025-01-16 14:50:03.939955 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:50:03.939966 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:50:03.939977 | orchestrator | 2025-01-16 14:50:03.939991 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-01-16 14:50:03.940004 | orchestrator | Thursday 16 January 2025 14:50:03 +0000 (0:00:04.755) 0:02:20.194 ****** 2025-01-16 14:50:03.940032 | orchestrator | changed: [testbed-manager] 2025-01-16 14:50:03.940161 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:50:03.940179 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:50:03.940190 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:50:03.940201 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:50:03.940216 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:50:03.940470 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:50:03.940750 | orchestrator | 2025-01-16 14:50:03.941005 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-01-16 14:50:03.941292 | orchestrator | Thursday 16 January 2025 14:50:03 +0000 (0:00:00.386) 0:02:20.580 ****** 2025-01-16 14:50:04.619520 | orchestrator | changed: [testbed-manager] 2025-01-16 14:50:04.620069 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:50:04.620262 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:50:04.620336 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:50:04.620363 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:50:05.242266 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:50:05.242458 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:50:05.242479 | orchestrator | 2025-01-16 14:50:05.242573 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-01-16 14:50:05.242589 | orchestrator | Thursday 16 January 2025 14:50:04 +0000 (0:00:00.678) 0:02:21.259 ****** 2025-01-16 14:50:05.242618 | orchestrator | changed: [testbed-manager] 2025-01-16 14:50:05.242769 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:50:05.242788 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:50:05.242801 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:50:05.242814 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:50:05.242826 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:50:05.242839 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:50:05.242851 | orchestrator | 2025-01-16 14:50:05.242869 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-01-16 14:50:05.243070 | orchestrator | Thursday 16 January 2025 14:50:05 +0000 (0:00:00.623) 0:02:21.883 ****** 2025-01-16 14:50:05.289044 | orchestrator | ok: [testbed-manager] 2025-01-16 14:50:05.311115 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:50:05.332378 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:50:05.351456 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:50:05.371489 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:50:05.411411 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:50:05.411668 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:50:05.411692 | orchestrator | 2025-01-16 14:50:05.411716 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-01-16 14:50:05.411841 | 
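Annotation: the rng role installs the kernel entropy daemon package and removes haveged, and the smartd role installs smartmontools, creates /var/log/smartd, drops a configuration file and enables the daemon. A compressed sketch of those steps; exact package, file and service names beyond what the log shows are assumptions:

- name: Install rng and smartmontools packages
  ansible.builtin.apt:
    name:
      - rng-tools5        # "rng package" in the log; exact Debian/Ubuntu package name assumed
      - smartmontools
    state: present

- name: Create /var/log/smartd directory
  ansible.builtin.file:
    path: /var/log/smartd
    state: directory
    mode: "0755"

- name: Manage smartd service
  ansible.builtin.service:
    name: smartmontools   # unit providing smartd on recent Debian/Ubuntu; name assumed
    state: started
    enabled: true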
orchestrator | Thursday 16 January 2025 14:50:05 +0000 (0:00:00.169) 0:02:22.052 ****** 2025-01-16 14:50:05.464269 | orchestrator | ok: [testbed-manager] 2025-01-16 14:50:05.484941 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:50:05.506479 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:50:05.526838 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:50:05.549564 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:50:05.592103 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:50:05.592341 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:50:05.592359 | orchestrator | 2025-01-16 14:50:05.592374 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-01-16 14:50:05.592554 | orchestrator | Thursday 16 January 2025 14:50:05 +0000 (0:00:00.180) 0:02:22.233 ****** 2025-01-16 14:50:05.656701 | orchestrator | ok: [testbed-manager] 2025-01-16 14:50:05.674348 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:50:05.697401 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:50:05.724052 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:50:05.767593 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:50:05.768220 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:50:05.768311 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:50:05.768344 | orchestrator | 2025-01-16 14:50:08.178349 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-01-16 14:50:08.178512 | orchestrator | Thursday 16 January 2025 14:50:05 +0000 (0:00:00.174) 0:02:22.408 ****** 2025-01-16 14:50:08.178539 | orchestrator | ok: [testbed-manager] 2025-01-16 14:50:08.179241 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:50:08.179277 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:50:08.179295 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:50:08.179305 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:50:08.179320 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:50:08.421543 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:50:08.421644 | orchestrator | 2025-01-16 14:50:08.421654 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-01-16 14:50:08.421662 | orchestrator | Thursday 16 January 2025 14:50:08 +0000 (0:00:02.410) 0:02:24.819 ****** 2025-01-16 14:50:08.421677 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 14:50:08.421881 | orchestrator | 2025-01-16 14:50:08.422007 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-01-16 14:50:08.422152 | orchestrator | Thursday 16 January 2025 14:50:08 +0000 (0:00:00.243) 0:02:25.062 ****** 2025-01-16 14:50:08.447907 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-01-16 14:50:08.471334 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-01-16 14:50:08.501870 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-01-16 14:50:08.523074 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-01-16 14:50:08.523177 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:50:08.523192 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-01-16 14:50:08.523205 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-01-16 
14:50:08.523230 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:50:08.529415 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-01-16 14:50:08.546882 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-01-16 14:50:08.547020 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:50:08.571055 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-01-16 14:50:08.571148 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-01-16 14:50:08.614777 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:50:08.614960 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-01-16 14:50:08.614987 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-01-16 14:50:08.615020 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:50:08.864081 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:50:08.864194 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-01-16 14:50:08.864209 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-01-16 14:50:08.864219 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:50:08.864229 | orchestrator | 2025-01-16 14:50:08.864238 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-01-16 14:50:08.864247 | orchestrator | Thursday 16 January 2025 14:50:08 +0000 (0:00:00.192) 0:02:25.255 ****** 2025-01-16 14:50:08.864271 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 14:50:08.880817 | orchestrator | 2025-01-16 14:50:08.880934 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-01-16 14:50:08.880956 | orchestrator | Thursday 16 January 2025 14:50:08 +0000 (0:00:00.243) 0:02:25.498 ****** 2025-01-16 14:50:08.880983 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-01-16 14:50:08.903681 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-01-16 14:50:08.926294 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:50:08.947593 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-01-16 14:50:08.947715 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:50:08.976141 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-01-16 14:50:08.976275 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:50:09.055256 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-01-16 14:50:09.055374 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:50:09.096193 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-01-16 14:50:09.096291 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:50:09.096371 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:50:09.096381 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-01-16 14:50:09.096389 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:50:09.096794 | orchestrator | 2025-01-16 14:50:09.096859 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-01-16 14:50:09.097055 | orchestrator | Thursday 16 January 2025 14:50:09 +0000 (0:00:00.238) 0:02:25.737 ****** 2025-01-16 
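Annotation: the cleanup role can stop and disable the apt-daily and apt-daily-upgrade systemd timers so unattended apt runs do not interfere with deployments; in this run the tasks are skipped on every host, but the intended effect looks like the following sketch:

- name: Disable apt-daily timers
  ansible.builtin.systemd:
    name: "{{ item }}.timer"
    state: stopped
    enabled: false
  loop:
    - apt-daily-upgrade
    - apt-daily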
14:50:09.343951 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 14:50:27.654984 | orchestrator | 2025-01-16 14:50:27.655124 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-01-16 14:50:27.655146 | orchestrator | Thursday 16 January 2025 14:50:09 +0000 (0:00:00.246) 0:02:25.983 ****** 2025-01-16 14:50:27.655181 | orchestrator | changed: [testbed-manager] 2025-01-16 14:50:32.588523 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:50:32.588710 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:50:32.588732 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:50:32.588745 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:50:32.588756 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:50:32.588798 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:50:32.588811 | orchestrator | 2025-01-16 14:50:32.588825 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-01-16 14:50:32.588838 | orchestrator | Thursday 16 January 2025 14:50:27 +0000 (0:00:18.306) 0:02:44.290 ****** 2025-01-16 14:50:32.588868 | orchestrator | changed: [testbed-manager] 2025-01-16 14:50:37.125335 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:50:37.125443 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:50:37.125452 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:50:37.125509 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:50:37.125516 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:50:37.125522 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:50:37.125562 | orchestrator | 2025-01-16 14:50:37.125570 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-01-16 14:50:37.125578 | orchestrator | Thursday 16 January 2025 14:50:32 +0000 (0:00:04.938) 0:02:49.228 ****** 2025-01-16 14:50:37.125596 | orchestrator | changed: [testbed-manager] 2025-01-16 14:50:38.182970 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:50:38.183949 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:50:38.183987 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:50:38.184001 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:50:38.184015 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:50:38.184027 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:50:38.184040 | orchestrator | 2025-01-16 14:50:38.184056 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-01-16 14:50:38.184070 | orchestrator | Thursday 16 January 2025 14:50:37 +0000 (0:00:04.535) 0:02:53.764 ****** 2025-01-16 14:50:38.184099 | orchestrator | ok: [testbed-manager] 2025-01-16 14:50:38.185404 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:50:38.185455 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:50:38.185480 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:50:38.185550 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:50:38.185789 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:50:38.185936 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:50:38.185962 | orchestrator | 2025-01-16 14:50:38.186391 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-01-16 14:50:38.186463 | 
orchestrator | Thursday 16 January 2025 14:50:38 +0000 (0:00:01.059) 0:02:54.823 ****** 2025-01-16 14:50:41.668110 | orchestrator | changed: [testbed-manager] 2025-01-16 14:50:41.668297 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:50:41.668388 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:50:41.668405 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:50:41.668427 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:50:41.668576 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:50:41.668602 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:50:41.668741 | orchestrator | 2025-01-16 14:50:41.669200 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-01-16 14:50:41.923739 | orchestrator | Thursday 16 January 2025 14:50:41 +0000 (0:00:03.483) 0:02:58.307 ****** 2025-01-16 14:50:41.923865 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 14:50:42.357849 | orchestrator | 2025-01-16 14:50:42.358091 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-01-16 14:50:42.358119 | orchestrator | Thursday 16 January 2025 14:50:41 +0000 (0:00:00.257) 0:02:58.564 ****** 2025-01-16 14:50:42.358150 | orchestrator | changed: [testbed-manager] 2025-01-16 14:50:42.358327 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:50:42.358352 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:50:42.358365 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:50:42.358383 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:50:42.358739 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:50:42.359047 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:50:42.359285 | orchestrator | 2025-01-16 14:50:42.359313 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-01-16 14:50:42.359527 | orchestrator | Thursday 16 January 2025 14:50:42 +0000 (0:00:00.433) 0:02:58.998 ****** 2025-01-16 14:50:43.334516 | orchestrator | ok: [testbed-manager] 2025-01-16 14:50:43.335584 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:50:43.335889 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:50:43.335939 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:50:43.336064 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:50:43.336088 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:50:43.336105 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:50:43.336122 | orchestrator | 2025-01-16 14:50:43.336141 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-01-16 14:50:43.336166 | orchestrator | Thursday 16 January 2025 14:50:43 +0000 (0:00:00.975) 0:02:59.974 ****** 2025-01-16 14:50:43.793596 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:50:43.793777 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:50:43.793788 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:50:43.793798 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:50:43.793913 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:50:43.794040 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:50:43.794231 | orchestrator | changed: [testbed-manager] 2025-01-16 14:50:43.794456 | orchestrator | 2025-01-16 14:50:43.794694 | orchestrator | TASK [osism.commons.timezone : 
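Annotation: the package cleanup removes the distribution's preinstalled clutter, uninstalls cloud-init and unattended-upgrades, cleans the apt cache, autoremoves unused dependencies and finally deletes the cloud-init configuration directory. A sketch of equivalent apt/file tasks (the concrete package list behind "Cleanup installed packages" is defined in role defaults and not shown in the log):

- name: Remove cloud-init and unattended-upgrades
  ansible.builtin.apt:
    name:
      - cloud-init
      - unattended-upgrades
    state: absent
    purge: true            # purge is an assumption; the log only shows removal

- name: Remove useless packages from the cache
  ansible.builtin.apt:
    autoclean: true

- name: Remove dependencies that are no longer required
  ansible.builtin.apt:
    autoremove: true

- name: Remove cloud-init configuration directory
  ansible.builtin.file:
    path: /etc/cloud       # assumed path for the cloud-init configuration directory
    state: absent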
Create /etc/adjtime file] *********************** 2025-01-16 14:50:43.794829 | orchestrator | Thursday 16 January 2025 14:50:43 +0000 (0:00:00.460) 0:03:00.434 ****** 2025-01-16 14:50:43.842246 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:50:43.862926 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:50:43.882245 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:50:43.903823 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:50:43.923921 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:50:43.958235 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:50:43.958410 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:50:43.958422 | orchestrator | 2025-01-16 14:50:43.958428 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-01-16 14:50:43.958457 | orchestrator | Thursday 16 January 2025 14:50:43 +0000 (0:00:00.164) 0:03:00.598 ****** 2025-01-16 14:50:43.996915 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:50:44.017404 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:50:44.036314 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:50:44.054160 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:50:44.073188 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:50:44.184011 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:50:44.184097 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:50:44.184119 | orchestrator | 2025-01-16 14:50:44.184127 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-01-16 14:50:44.184146 | orchestrator | Thursday 16 January 2025 14:50:44 +0000 (0:00:00.225) 0:03:00.824 ****** 2025-01-16 14:50:44.225925 | orchestrator | ok: [testbed-manager] 2025-01-16 14:50:44.247003 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:50:44.267216 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:50:44.297234 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:50:44.317285 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:50:44.358875 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:50:44.358960 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:50:44.358969 | orchestrator | 2025-01-16 14:50:44.358978 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-01-16 14:50:44.359048 | orchestrator | Thursday 16 January 2025 14:50:44 +0000 (0:00:00.174) 0:03:00.999 ****** 2025-01-16 14:50:44.404189 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:50:44.426857 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:50:44.447775 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:50:44.467937 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:50:44.487225 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:50:44.527278 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:50:44.576087 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:50:44.576189 | orchestrator | 2025-01-16 14:50:44.576202 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-01-16 14:50:44.576212 | orchestrator | Thursday 16 January 2025 14:50:44 +0000 (0:00:00.168) 0:03:01.167 ****** 2025-01-16 14:50:44.576237 | orchestrator | ok: [testbed-manager] 2025-01-16 14:50:44.673137 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:50:44.699249 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:50:44.716189 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:50:44.758643 | 
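Annotation: the timezone role installs tzdata and switches every node to UTC; the /etc/adjtime tasks are skipped in this run. Equivalent minimal tasks, assuming the community.general.timezone module:

- name: Install tzdata package
  ansible.builtin.apt:
    name: tzdata
    state: present

- name: Set timezone to UTC
  community.general.timezone:
    name: Etc/UTC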
orchestrator | ok: [testbed-node-3] 2025-01-16 14:50:44.758805 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:50:44.758826 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:50:44.758938 | orchestrator | 2025-01-16 14:50:44.759188 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-01-16 14:50:44.759356 | orchestrator | Thursday 16 January 2025 14:50:44 +0000 (0:00:00.231) 0:03:01.399 ****** 2025-01-16 14:50:44.798968 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:50:44.819087 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:50:44.838748 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:50:44.858766 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:50:44.878622 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:50:44.915354 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:50:44.915446 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:50:44.915457 | orchestrator | 2025-01-16 14:50:44.915468 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-01-16 14:50:44.915768 | orchestrator | Thursday 16 January 2025 14:50:44 +0000 (0:00:00.157) 0:03:01.556 ****** 2025-01-16 14:50:44.967013 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:50:44.988640 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:50:45.010420 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:50:45.031500 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:50:45.050296 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:50:45.084091 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:50:45.084437 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:50:45.084449 | orchestrator | 2025-01-16 14:50:45.084457 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-01-16 14:50:45.087832 | orchestrator | Thursday 16 January 2025 14:50:45 +0000 (0:00:00.168) 0:03:01.725 ****** 2025-01-16 14:50:45.337578 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 14:50:45.337787 | orchestrator | 2025-01-16 14:50:45.337813 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-01-16 14:50:45.337839 | orchestrator | Thursday 16 January 2025 14:50:45 +0000 (0:00:00.252) 0:03:01.978 ****** 2025-01-16 14:50:45.829316 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:50:45.830589 | orchestrator | ok: [testbed-manager] 2025-01-16 14:50:45.830631 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:50:45.830738 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:50:45.830808 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:50:45.831241 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:50:45.831309 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:50:45.831639 | orchestrator | 2025-01-16 14:50:45.831806 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-01-16 14:50:45.832135 | orchestrator | Thursday 16 January 2025 14:50:45 +0000 (0:00:00.491) 0:03:02.469 ****** 2025-01-16 14:50:47.603970 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:50:47.604127 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:50:47.604144 | orchestrator | ok: [testbed-manager] 2025-01-16 
14:50:47.604459 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:50:47.604844 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:50:47.605154 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:50:47.605585 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:50:47.605780 | orchestrator | 2025-01-16 14:50:47.606151 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-01-16 14:50:47.606293 | orchestrator | Thursday 16 January 2025 14:50:47 +0000 (0:00:01.775) 0:03:04.244 ****** 2025-01-16 14:50:47.653043 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-01-16 14:50:47.656371 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-01-16 14:50:47.656455 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-01-16 14:50:47.697896 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:50:47.698092 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-01-16 14:50:47.698219 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-01-16 14:50:47.698645 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-01-16 14:50:47.742585 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:50:47.742840 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-01-16 14:50:47.742894 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-01-16 14:50:47.742978 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-01-16 14:50:47.787385 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:50:47.787518 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-01-16 14:50:47.787836 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-01-16 14:50:47.787992 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-01-16 14:50:47.920993 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:50:47.921283 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-01-16 14:50:47.921307 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-01-16 14:50:47.921320 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-01-16 14:50:47.965453 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:50:47.965708 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-01-16 14:50:47.965970 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-01-16 14:50:47.966004 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-01-16 14:50:48.047638 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:50:48.047975 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-01-16 14:50:48.048005 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-01-16 14:50:48.048025 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-01-16 14:50:48.048339 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:50:48.048365 | orchestrator | 2025-01-16 14:50:48.048777 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-01-16 14:50:51.631389 | orchestrator | Thursday 16 January 2025 14:50:48 +0000 (0:00:00.443) 0:03:04.688 ****** 2025-01-16 14:50:51.631508 | orchestrator | ok: [testbed-manager] 2025-01-16 14:50:51.632128 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:50:51.632168 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:50:51.632750 | 
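Annotation: before installing docker-ce, the role gathers package facts and verifies that conflicting distribution packages (containerd, docker.io, docker-engine) are not present; on these nodes the check items are skipped. One way to express such a guard, as a sketch only, since the role may use assert or a different condition:

- name: Gather package facts
  ansible.builtin.package_facts:

- name: Check whether packages are installed that should not be installed
  ansible.builtin.fail:
    msg: "{{ item }} conflicts with docker-ce and must be removed first"
  when: item in ansible_facts.packages
  loop:
    - containerd
    - docker.io
    - docker-engine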
orchestrator | changed: [testbed-node-0] 2025-01-16 14:50:51.632774 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:50:51.632783 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:50:51.632797 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:50:51.632926 | orchestrator | 2025-01-16 14:50:51.633003 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-01-16 14:50:51.633285 | orchestrator | Thursday 16 January 2025 14:50:51 +0000 (0:00:03.583) 0:03:08.271 ****** 2025-01-16 14:50:52.345782 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:50:52.345998 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:50:52.346271 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:50:52.346291 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:50:52.346315 | orchestrator | ok: [testbed-manager] 2025-01-16 14:50:52.346415 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:50:52.346439 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:50:52.346643 | orchestrator | 2025-01-16 14:50:52.346749 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-01-16 14:50:56.467215 | orchestrator | Thursday 16 January 2025 14:50:52 +0000 (0:00:00.714) 0:03:08.985 ****** 2025-01-16 14:50:56.467376 | orchestrator | ok: [testbed-manager] 2025-01-16 14:50:56.467854 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:50:56.467888 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:50:56.467904 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:50:56.467920 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:50:56.467942 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:50:58.190278 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:50:58.190552 | orchestrator | 2025-01-16 14:50:58.190631 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-01-16 14:50:58.190692 | orchestrator | Thursday 16 January 2025 14:50:56 +0000 (0:00:04.121) 0:03:13.107 ****** 2025-01-16 14:50:58.190743 | orchestrator | changed: [testbed-manager] 2025-01-16 14:50:58.190913 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:50:58.191009 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:50:58.191039 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:50:58.191073 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:50:59.080545 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:50:59.080702 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:50:59.080725 | orchestrator | 2025-01-16 14:50:59.080743 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-01-16 14:50:59.080759 | orchestrator | Thursday 16 January 2025 14:50:58 +0000 (0:00:01.722) 0:03:14.829 ****** 2025-01-16 14:50:59.080790 | orchestrator | ok: [testbed-manager] 2025-01-16 14:50:59.081119 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:50:59.081147 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:50:59.081167 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:50:59.083085 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:50:59.083155 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:50:59.083459 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:50:59.083504 | orchestrator | 2025-01-16 14:50:59.084113 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-01-16 14:50:59.084180 | orchestrator 
| Thursday 16 January 2025 14:50:59 +0000 (0:00:00.889) 0:03:15.719 ****** 2025-01-16 14:50:59.848211 | orchestrator | ok: [testbed-manager] 2025-01-16 14:50:59.848451 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:50:59.848512 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:50:59.848528 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:50:59.848543 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:50:59.848557 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:50:59.848571 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:50:59.848593 | orchestrator | 2025-01-16 14:50:59.848646 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-01-16 14:50:59.848707 | orchestrator | Thursday 16 January 2025 14:50:59 +0000 (0:00:00.767) 0:03:16.486 ****** 2025-01-16 14:50:59.983154 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:51:00.028272 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:51:00.073030 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:51:00.115197 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:51:00.237034 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:51:00.237158 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:51:00.237169 | orchestrator | changed: [testbed-manager] 2025-01-16 14:51:00.237179 | orchestrator | 2025-01-16 14:51:00.237776 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-01-16 14:51:00.237880 | orchestrator | Thursday 16 January 2025 14:51:00 +0000 (0:00:00.391) 0:03:16.878 ****** 2025-01-16 14:51:06.537384 | orchestrator | ok: [testbed-manager] 2025-01-16 14:51:06.537776 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:51:06.537818 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:51:06.537834 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:51:06.537886 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:51:06.537901 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:51:06.537924 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:51:07.122462 | orchestrator | 2025-01-16 14:51:07.122903 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-01-16 14:51:07.122956 | orchestrator | Thursday 16 January 2025 14:51:06 +0000 (0:00:06.297) 0:03:23.176 ****** 2025-01-16 14:51:07.123008 | orchestrator | changed: [testbed-manager] 2025-01-16 14:51:07.123206 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:51:07.123232 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:51:07.123248 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:51:07.123295 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:51:07.123999 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:51:07.124040 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:51:07.124055 | orchestrator | 2025-01-16 14:51:07.124079 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-01-16 14:51:15.275864 | orchestrator | Thursday 16 January 2025 14:51:07 +0000 (0:00:00.587) 0:03:23.763 ****** 2025-01-16 14:51:15.275959 | orchestrator | ok: [testbed-manager] 2025-01-16 14:51:15.276641 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:51:15.276663 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:51:15.278217 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:51:15.278376 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:51:15.278409 | 
orchestrator | changed: [testbed-node-4] 2025-01-16 14:51:15.278577 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:51:15.278904 | orchestrator | 2025-01-16 14:51:15.279007 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-01-16 14:51:15.279026 | orchestrator | Thursday 16 January 2025 14:51:15 +0000 (0:00:08.148) 0:03:31.911 ****** 2025-01-16 14:51:23.166143 | orchestrator | ok: [testbed-manager] 2025-01-16 14:51:23.506160 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:51:23.506248 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:51:23.506255 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:51:23.506261 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:51:23.506267 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:51:23.506273 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:51:23.506278 | orchestrator | 2025-01-16 14:51:23.506286 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-01-16 14:51:23.506293 | orchestrator | Thursday 16 January 2025 14:51:23 +0000 (0:00:07.886) 0:03:39.798 ****** 2025-01-16 14:51:23.506310 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-01-16 14:51:23.554108 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-01-16 14:51:24.063894 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-01-16 14:51:24.064141 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-01-16 14:51:24.064183 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-01-16 14:51:24.064264 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-01-16 14:51:24.064361 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-01-16 14:51:24.064397 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-01-16 14:51:24.064563 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-01-16 14:51:24.064935 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-01-16 14:51:24.065997 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-01-16 14:51:24.066140 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-01-16 14:51:24.066154 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-01-16 14:51:24.066374 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-01-16 14:51:24.066599 | orchestrator | 2025-01-16 14:51:24.066800 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-01-16 14:51:24.067060 | orchestrator | Thursday 16 January 2025 14:51:24 +0000 (0:00:00.906) 0:03:40.704 ****** 2025-01-16 14:51:24.150205 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:51:24.195946 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:51:24.244346 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:51:24.287637 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:51:24.336361 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:51:24.408518 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:51:24.408659 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:51:24.408674 | orchestrator | 2025-01-16 14:51:24.408867 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-01-16 14:51:24.409619 | orchestrator | Thursday 16 January 2025 14:51:24 +0000 (0:00:00.343) 0:03:41.048 ****** 2025-01-16 14:51:26.870262 | orchestrator | ok: 
[testbed-manager] 2025-01-16 14:51:26.870416 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:51:26.870435 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:51:26.870829 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:51:26.870886 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:51:26.870897 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:51:26.870908 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:51:26.871125 | orchestrator | 2025-01-16 14:51:26.871397 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-01-16 14:51:26.871462 | orchestrator | Thursday 16 January 2025 14:51:26 +0000 (0:00:02.461) 0:03:43.509 ****** 2025-01-16 14:51:26.957035 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:51:26.998666 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:51:27.045124 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:51:27.086248 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:51:27.127302 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:51:27.189554 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:51:27.189755 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:51:27.189777 | orchestrator | 2025-01-16 14:51:27.189796 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-01-16 14:51:27.189928 | orchestrator | Thursday 16 January 2025 14:51:27 +0000 (0:00:00.321) 0:03:43.831 ****** 2025-01-16 14:51:27.235640 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-01-16 14:51:27.281389 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-01-16 14:51:27.281483 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:51:27.329912 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-01-16 14:51:27.330057 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-01-16 14:51:27.330089 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:51:27.373836 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-01-16 14:51:27.373949 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-01-16 14:51:27.373984 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:51:27.416823 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-01-16 14:51:27.416931 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-01-16 14:51:27.416958 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:51:27.417015 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-01-16 14:51:27.417029 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-01-16 14:51:27.463399 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:51:27.532393 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-01-16 14:51:27.532492 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-01-16 14:51:27.532513 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:51:27.532629 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-01-16 14:51:27.532644 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-01-16 14:51:27.532931 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:51:27.532954 | orchestrator | 2025-01-16 14:51:27.533022 | orchestrator | TASK [osism.services.docker : Install python3-pip 
package (install python bindings from pip)] *** 2025-01-16 14:51:27.533186 | orchestrator | Thursday 16 January 2025 14:51:27 +0000 (0:00:00.341) 0:03:44.172 ****** 2025-01-16 14:51:27.621108 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:51:27.663508 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:51:27.704418 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:51:27.748569 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:51:27.789521 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:51:27.849020 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:51:27.849355 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:51:27.849385 | orchestrator | 2025-01-16 14:51:27.849407 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-01-16 14:51:27.849477 | orchestrator | Thursday 16 January 2025 14:51:27 +0000 (0:00:00.315) 0:03:44.488 ****** 2025-01-16 14:51:27.937550 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:51:27.978453 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:51:28.020838 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:51:28.061297 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:51:28.104202 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:51:28.166790 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:51:28.166989 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:51:28.167015 | orchestrator | 2025-01-16 14:51:28.167031 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-01-16 14:51:28.167054 | orchestrator | Thursday 16 January 2025 14:51:28 +0000 (0:00:00.319) 0:03:44.807 ****** 2025-01-16 14:51:28.252869 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:51:28.297308 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:51:28.436470 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:51:28.482193 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:51:28.526570 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:51:28.604093 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:51:28.604245 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:51:28.604263 | orchestrator | 2025-01-16 14:51:28.604274 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-01-16 14:51:28.604292 | orchestrator | Thursday 16 January 2025 14:51:28 +0000 (0:00:00.436) 0:03:45.244 ****** 2025-01-16 14:51:32.109530 | orchestrator | ok: [testbed-manager] 2025-01-16 14:51:32.110420 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:51:32.110452 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:51:32.110459 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:51:32.110473 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:51:32.110501 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:51:32.110508 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:51:32.110515 | orchestrator | 2025-01-16 14:51:32.110522 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-01-16 14:51:32.110532 | orchestrator | Thursday 16 January 2025 14:51:32 +0000 (0:00:03.503) 0:03:48.748 ****** 2025-01-16 14:51:32.646441 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, 
testbed-node-5 2025-01-16 14:51:32.914502 | orchestrator | 2025-01-16 14:51:32.914606 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-01-16 14:51:32.914620 | orchestrator | Thursday 16 January 2025 14:51:32 +0000 (0:00:00.539) 0:03:49.287 ****** 2025-01-16 14:51:32.914643 | orchestrator | ok: [testbed-manager] 2025-01-16 14:51:33.157873 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:51:33.158304 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:51:33.158357 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:51:33.158372 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:51:33.158386 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:51:33.158409 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:51:33.159691 | orchestrator | 2025-01-16 14:51:33.159772 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-01-16 14:51:33.446650 | orchestrator | Thursday 16 January 2025 14:51:33 +0000 (0:00:00.508) 0:03:49.796 ****** 2025-01-16 14:51:33.446932 | orchestrator | ok: [testbed-manager] 2025-01-16 14:51:33.787399 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:51:33.787576 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:51:33.787591 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:51:33.787600 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:51:33.787615 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:51:33.788292 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:51:33.788460 | orchestrator | 2025-01-16 14:51:33.788493 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-01-16 14:51:33.788523 | orchestrator | Thursday 16 January 2025 14:51:33 +0000 (0:00:00.631) 0:03:50.428 ****** 2025-01-16 14:51:34.572653 | orchestrator | ok: [testbed-manager] 2025-01-16 14:51:34.576137 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:51:34.654213 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:51:34.654328 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:51:34.654343 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:51:34.654354 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:51:34.654364 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:51:34.654374 | orchestrator | 2025-01-16 14:51:34.654385 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-01-16 14:51:34.654396 | orchestrator | Thursday 16 January 2025 14:51:34 +0000 (0:00:00.782) 0:03:51.211 ****** 2025-01-16 14:51:34.654423 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:51:35.481930 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:51:35.482355 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:51:35.482411 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:51:35.483519 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:51:35.483557 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:51:35.483572 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:51:35.483586 | orchestrator | 2025-01-16 14:51:35.483602 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-01-16 14:51:35.483625 | orchestrator | Thursday 16 January 2025 14:51:35 +0000 (0:00:00.908) 0:03:52.119 ****** 2025-01-16 14:51:36.306646 | orchestrator | ok: [testbed-manager] 2025-01-16 14:51:37.261911 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:51:37.262999 | orchestrator 
| changed: [testbed-node-1] 2025-01-16 14:51:37.263042 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:51:37.263059 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:51:37.263073 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:51:37.263087 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:51:37.263101 | orchestrator | 2025-01-16 14:51:37.263153 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-01-16 14:51:37.263170 | orchestrator | Thursday 16 January 2025 14:51:36 +0000 (0:00:00.826) 0:03:52.945 ****** 2025-01-16 14:51:37.263202 | orchestrator | changed: [testbed-manager] 2025-01-16 14:51:37.263302 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:51:37.263322 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:51:37.263337 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:51:37.263351 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:51:37.263364 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:51:37.263383 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:51:37.263440 | orchestrator | 2025-01-16 14:51:37.263526 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-01-16 14:51:37.263794 | orchestrator | Thursday 16 January 2025 14:51:37 +0000 (0:00:00.955) 0:03:53.901 ****** 2025-01-16 14:51:37.848206 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 14:51:38.712810 | orchestrator | 2025-01-16 14:51:38.712925 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-01-16 14:51:38.712936 | orchestrator | Thursday 16 January 2025 14:51:37 +0000 (0:00:00.586) 0:03:54.487 ****** 2025-01-16 14:51:38.712970 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:51:38.713001 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:51:38.713009 | orchestrator | ok: [testbed-manager] 2025-01-16 14:51:38.713125 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:51:38.713429 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:51:38.714215 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:51:38.714300 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:51:38.714310 | orchestrator | 2025-01-16 14:51:38.714318 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-01-16 14:51:38.714591 | orchestrator | Thursday 16 January 2025 14:51:38 +0000 (0:00:00.864) 0:03:55.351 ****** 2025-01-16 14:51:39.441105 | orchestrator | ok: [testbed-manager] 2025-01-16 14:51:39.441287 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:51:39.441379 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:51:39.441398 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:51:39.441692 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:51:39.442616 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:51:39.442798 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:51:39.442822 | orchestrator | 2025-01-16 14:51:39.442842 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-01-16 14:51:39.443092 | orchestrator | Thursday 16 January 2025 14:51:39 +0000 (0:00:00.730) 0:03:56.081 ****** 2025-01-16 14:51:40.286921 | orchestrator | ok: [testbed-manager] 2025-01-16 14:51:40.287204 | orchestrator | ok: [testbed-node-0] 2025-01-16 
14:51:40.287234 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:51:40.287258 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:51:40.288585 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:51:40.288612 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:51:40.288631 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:51:40.288873 | orchestrator | 2025-01-16 14:51:40.289254 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-01-16 14:51:40.289644 | orchestrator | Thursday 16 January 2025 14:51:40 +0000 (0:00:00.844) 0:03:56.926 ****** 2025-01-16 14:51:41.028490 | orchestrator | ok: [testbed-manager] 2025-01-16 14:51:41.028697 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:51:41.028779 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:51:41.028796 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:51:41.028852 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:51:41.028948 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:51:41.028966 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:51:41.029012 | orchestrator | 2025-01-16 14:51:41.029063 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-01-16 14:51:41.029085 | orchestrator | Thursday 16 January 2025 14:51:41 +0000 (0:00:00.742) 0:03:57.669 ****** 2025-01-16 14:51:41.886351 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 14:51:41.888059 | orchestrator | 2025-01-16 14:51:41.888148 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-01-16 14:51:41.888175 | orchestrator | Thursday 16 January 2025 14:51:41 +0000 (0:00:00.568) 0:03:58.237 ****** 2025-01-16 14:51:41.888230 | orchestrator | 2025-01-16 14:51:41.888248 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-01-16 14:51:41.888896 | orchestrator | Thursday 16 January 2025 14:51:41 +0000 (0:00:00.026) 0:03:58.264 ****** 2025-01-16 14:51:41.889082 | orchestrator | 2025-01-16 14:51:41.889178 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-01-16 14:51:41.889386 | orchestrator | Thursday 16 January 2025 14:51:41 +0000 (0:00:00.029) 0:03:58.294 ****** 2025-01-16 14:51:41.889578 | orchestrator | 2025-01-16 14:51:41.889696 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-01-16 14:51:41.890000 | orchestrator | Thursday 16 January 2025 14:51:41 +0000 (0:00:00.032) 0:03:58.327 ****** 2025-01-16 14:51:41.890145 | orchestrator | 2025-01-16 14:51:41.890477 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-01-16 14:51:41.890634 | orchestrator | Thursday 16 January 2025 14:51:41 +0000 (0:00:00.028) 0:03:58.355 ****** 2025-01-16 14:51:41.890831 | orchestrator | 2025-01-16 14:51:41.890924 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-01-16 14:51:41.891085 | orchestrator | Thursday 16 January 2025 14:51:41 +0000 (0:00:00.027) 0:03:58.383 ****** 2025-01-16 14:51:41.891267 | orchestrator | 2025-01-16 14:51:41.891369 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-01-16 14:51:41.891514 | orchestrator | Thursday 16 
January 2025 14:51:41 +0000 (0:00:00.114) 0:03:58.497 ****** 2025-01-16 14:51:41.891691 | orchestrator | 2025-01-16 14:51:41.891842 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-01-16 14:51:41.891964 | orchestrator | Thursday 16 January 2025 14:51:41 +0000 (0:00:00.028) 0:03:58.525 ****** 2025-01-16 14:51:42.560272 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:51:42.560804 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:51:42.560840 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:51:42.560864 | orchestrator | 2025-01-16 14:51:42.561190 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-01-16 14:51:43.376578 | orchestrator | Thursday 16 January 2025 14:51:42 +0000 (0:00:00.671) 0:03:59.197 ****** 2025-01-16 14:51:43.376918 | orchestrator | changed: [testbed-manager] 2025-01-16 14:51:43.377135 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:51:43.377179 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:51:43.377244 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:51:43.377266 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:51:43.377282 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:51:43.377304 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:51:43.377443 | orchestrator | 2025-01-16 14:51:43.377639 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-01-16 14:51:43.378314 | orchestrator | Thursday 16 January 2025 14:51:43 +0000 (0:00:00.818) 0:04:00.015 ****** 2025-01-16 14:51:44.090964 | orchestrator | changed: [testbed-manager] 2025-01-16 14:51:44.091101 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:51:44.091116 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:51:44.091123 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:51:44.091134 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:51:44.091254 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:51:44.091270 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:51:44.091389 | orchestrator | 2025-01-16 14:51:44.091601 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-01-16 14:51:44.091686 | orchestrator | Thursday 16 January 2025 14:51:44 +0000 (0:00:00.715) 0:04:00.731 ****** 2025-01-16 14:51:44.177309 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:51:45.534990 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:51:45.535796 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:51:45.535859 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:51:45.535873 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:51:45.535893 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:51:45.535941 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:51:45.535953 | orchestrator | 2025-01-16 14:51:45.535967 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-01-16 14:51:45.536478 | orchestrator | Thursday 16 January 2025 14:51:45 +0000 (0:00:01.441) 0:04:02.173 ****** 2025-01-16 14:51:45.596249 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:51:45.596377 | orchestrator | 2025-01-16 14:51:45.596394 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-01-16 14:51:45.596409 | orchestrator | Thursday 16 January 2025 14:51:45 +0000 (0:00:00.063) 0:04:02.237 ****** 2025-01-16 14:51:46.336205 | 
orchestrator | ok: [testbed-manager] 2025-01-16 14:51:46.336825 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:51:46.336874 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:51:46.336884 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:51:46.336899 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:51:46.337015 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:51:46.337196 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:51:46.337213 | orchestrator | 2025-01-16 14:51:46.337347 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-01-16 14:51:46.337602 | orchestrator | Thursday 16 January 2025 14:51:46 +0000 (0:00:00.738) 0:04:02.975 ****** 2025-01-16 14:51:46.427001 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:51:46.472109 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:51:46.513925 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:51:46.569168 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:51:46.607328 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:51:46.679840 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:51:47.265043 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:51:47.265237 | orchestrator | 2025-01-16 14:51:47.265275 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-01-16 14:51:47.265333 | orchestrator | Thursday 16 January 2025 14:51:46 +0000 (0:00:00.344) 0:04:03.320 ****** 2025-01-16 14:51:47.265381 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 14:51:47.543623 | orchestrator | 2025-01-16 14:51:47.544379 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-01-16 14:51:47.544401 | orchestrator | Thursday 16 January 2025 14:51:47 +0000 (0:00:00.586) 0:04:03.906 ****** 2025-01-16 14:51:47.544419 | orchestrator | ok: [testbed-manager] 2025-01-16 14:51:47.811523 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:51:47.812680 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:51:47.812755 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:51:47.812772 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:51:47.812796 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:51:47.813371 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:51:49.365665 | orchestrator | 2025-01-16 14:51:49.365878 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-01-16 14:51:49.365902 | orchestrator | Thursday 16 January 2025 14:51:47 +0000 (0:00:00.544) 0:04:04.450 ****** 2025-01-16 14:51:49.365935 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-01-16 14:51:49.366109 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-01-16 14:51:49.366134 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-01-16 14:51:49.366148 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-01-16 14:51:49.366177 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-01-16 14:51:49.366989 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-01-16 14:51:49.367037 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-01-16 14:51:49.367121 | orchestrator | changed: 
[testbed-node-5] => (item=docker_containers) 2025-01-16 14:51:49.367195 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-01-16 14:51:49.367216 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-01-16 14:51:49.367451 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-01-16 14:51:49.367672 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-01-16 14:51:49.367868 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-01-16 14:51:49.368041 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-01-16 14:51:49.368393 | orchestrator | 2025-01-16 14:51:49.368651 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-01-16 14:51:49.368685 | orchestrator | Thursday 16 January 2025 14:51:49 +0000 (0:00:01.554) 0:04:06.005 ****** 2025-01-16 14:51:49.451234 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:51:49.493081 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:51:49.537523 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:51:49.580350 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:51:49.618084 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:51:49.671538 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:51:49.671789 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:51:49.671819 | orchestrator | 2025-01-16 14:51:49.671843 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-01-16 14:51:50.181264 | orchestrator | Thursday 16 January 2025 14:51:49 +0000 (0:00:00.306) 0:04:06.311 ****** 2025-01-16 14:51:50.181405 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 14:51:50.437364 | orchestrator | 2025-01-16 14:51:50.437420 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-01-16 14:51:50.437436 | orchestrator | Thursday 16 January 2025 14:51:50 +0000 (0:00:00.511) 0:04:06.822 ****** 2025-01-16 14:51:50.437461 | orchestrator | ok: [testbed-manager] 2025-01-16 14:51:50.483738 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:51:50.805947 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:51:50.806141 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:51:50.806258 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:51:50.806277 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:51:50.806314 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:51:50.806334 | orchestrator | 2025-01-16 14:51:50.806380 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-01-16 14:51:50.806781 | orchestrator | Thursday 16 January 2025 14:51:50 +0000 (0:00:00.622) 0:04:07.445 ****** 2025-01-16 14:51:51.084108 | orchestrator | ok: [testbed-manager] 2025-01-16 14:51:51.322390 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:51:51.322559 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:51:51.322585 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:51:51.322596 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:51:51.322606 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:51:51.322616 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:51:51.322626 | orchestrator | 2025-01-16 14:51:51.322641 | orchestrator | TASK 
[osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-01-16 14:51:51.402792 | orchestrator | Thursday 16 January 2025 14:51:51 +0000 (0:00:00.517) 0:04:07.963 ****** 2025-01-16 14:51:51.402930 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:51:51.443266 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:51:51.483899 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:51:51.525606 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:51:51.569441 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:51:51.625238 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:51:51.625448 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:51:51.625473 | orchestrator | 2025-01-16 14:51:51.625486 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-01-16 14:51:51.625506 | orchestrator | Thursday 16 January 2025 14:51:51 +0000 (0:00:00.302) 0:04:08.265 ****** 2025-01-16 14:51:52.479809 | orchestrator | ok: [testbed-manager] 2025-01-16 14:51:52.480711 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:51:52.480765 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:51:52.480782 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:51:52.481026 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:51:52.481045 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:51:52.481297 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:51:52.481565 | orchestrator | 2025-01-16 14:51:52.481789 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-01-16 14:51:52.482698 | orchestrator | Thursday 16 January 2025 14:51:52 +0000 (0:00:00.853) 0:04:09.119 ****** 2025-01-16 14:51:52.571443 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:51:52.612554 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:51:52.657058 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:51:52.700611 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:51:52.739247 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:51:52.888161 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:51:52.888406 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:51:52.888429 | orchestrator | 2025-01-16 14:51:52.888451 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-01-16 14:51:52.888536 | orchestrator | Thursday 16 January 2025 14:51:52 +0000 (0:00:00.409) 0:04:09.528 ****** 2025-01-16 14:51:53.929072 | orchestrator | ok: [testbed-manager] 2025-01-16 14:51:53.929211 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:51:53.929222 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:51:53.929228 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:51:53.929234 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:51:53.929239 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:51:53.929248 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:51:53.929541 | orchestrator | 2025-01-16 14:51:53.929570 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-01-16 14:51:53.929644 | orchestrator | Thursday 16 January 2025 14:51:53 +0000 (0:00:01.035) 0:04:10.564 ****** 2025-01-16 14:51:54.725341 | orchestrator | ok: [testbed-manager] 2025-01-16 14:51:55.878154 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:51:55.878263 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:51:55.878272 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:51:55.878278 | 
orchestrator | changed: [testbed-node-3] 2025-01-16 14:51:55.878283 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:51:55.878289 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:51:55.878294 | orchestrator | 2025-01-16 14:51:55.878302 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-01-16 14:51:55.878310 | orchestrator | Thursday 16 January 2025 14:51:54 +0000 (0:00:00.799) 0:04:11.363 ****** 2025-01-16 14:51:55.878325 | orchestrator | ok: [testbed-manager] 2025-01-16 14:51:55.878352 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:51:55.878357 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:51:55.878362 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:51:55.878367 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:51:55.878372 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:51:55.878377 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:51:55.878384 | orchestrator | 2025-01-16 14:51:55.880999 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-01-16 14:51:55.881107 | orchestrator | Thursday 16 January 2025 14:51:55 +0000 (0:00:01.154) 0:04:12.517 ****** 2025-01-16 14:51:57.027564 | orchestrator | ok: [testbed-manager] 2025-01-16 14:51:57.306913 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:51:57.307023 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:51:57.307057 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:51:57.307066 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:51:57.307074 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:51:57.307083 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:51:57.307092 | orchestrator | 2025-01-16 14:51:57.307102 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-01-16 14:51:57.307113 | orchestrator | Thursday 16 January 2025 14:51:57 +0000 (0:00:01.146) 0:04:13.664 ****** 2025-01-16 14:51:57.307135 | orchestrator | ok: [testbed-manager] 2025-01-16 14:51:57.544630 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:51:57.544879 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:51:57.544920 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:51:57.545337 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:51:57.545436 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:51:57.545469 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:51:57.545479 | orchestrator | 2025-01-16 14:51:57.545555 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-01-16 14:51:57.545783 | orchestrator | Thursday 16 January 2025 14:51:57 +0000 (0:00:00.522) 0:04:14.186 ****** 2025-01-16 14:51:57.627846 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:51:57.669714 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:51:57.707782 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:51:57.748175 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:51:57.791476 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:51:58.067154 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:51:58.067321 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:51:58.067347 | orchestrator | 2025-01-16 14:51:58.067609 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-01-16 14:51:58.067766 | orchestrator | Thursday 16 January 2025 14:51:58 +0000 (0:00:00.521) 0:04:14.707 ****** 2025-01-16 
14:51:58.155583 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:51:58.198611 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:51:58.246152 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:51:58.288047 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:51:58.331381 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:51:58.395876 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:51:58.395978 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:51:58.395989 | orchestrator | 2025-01-16 14:51:58.396088 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-01-16 14:51:58.396210 | orchestrator | Thursday 16 January 2025 14:51:58 +0000 (0:00:00.329) 0:04:15.037 ****** 2025-01-16 14:51:58.482253 | orchestrator | ok: [testbed-manager] 2025-01-16 14:51:58.624314 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:51:58.666847 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:51:58.708428 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:51:58.754285 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:51:58.817241 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:51:58.817387 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:51:58.817409 | orchestrator | 2025-01-16 14:51:58.817660 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-01-16 14:51:58.817934 | orchestrator | Thursday 16 January 2025 14:51:58 +0000 (0:00:00.420) 0:04:15.457 ****** 2025-01-16 14:51:58.908747 | orchestrator | ok: [testbed-manager] 2025-01-16 14:51:58.950845 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:51:58.991908 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:51:59.038082 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:51:59.082089 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:51:59.144259 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:51:59.145057 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:51:59.145196 | orchestrator | 2025-01-16 14:51:59.145298 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-01-16 14:51:59.145327 | orchestrator | Thursday 16 January 2025 14:51:59 +0000 (0:00:00.326) 0:04:15.784 ****** 2025-01-16 14:51:59.229663 | orchestrator | ok: [testbed-manager] 2025-01-16 14:51:59.271815 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:51:59.317966 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:51:59.361130 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:51:59.403009 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:51:59.470650 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:51:59.470852 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:51:59.470878 | orchestrator | 2025-01-16 14:51:59.471076 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-01-16 14:51:59.471322 | orchestrator | Thursday 16 January 2025 14:51:59 +0000 (0:00:00.327) 0:04:16.112 ****** 2025-01-16 14:52:01.976224 | orchestrator | ok: [testbed-manager] 2025-01-16 14:52:01.976794 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:52:01.976853 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:52:01.976872 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:52:01.976889 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:52:01.976907 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:52:01.976950 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:52:01.976969 | orchestrator | 2025-01-16 14:52:01.976987 | orchestrator | TASK 
[osism.services.chrony : Manage timesyncd service] ************************ 2025-01-16 14:52:01.977017 | orchestrator | Thursday 16 January 2025 14:52:01 +0000 (0:00:02.503) 0:04:18.615 ****** 2025-01-16 14:52:02.067373 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:52:02.111917 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:52:02.154940 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:52:02.290583 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:52:02.335172 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:52:02.405626 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:52:02.405845 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:52:02.405871 | orchestrator | 2025-01-16 14:52:02.405903 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-01-16 14:52:02.405969 | orchestrator | Thursday 16 January 2025 14:52:02 +0000 (0:00:00.431) 0:04:19.046 ****** 2025-01-16 14:52:02.929454 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 14:52:04.051224 | orchestrator | 2025-01-16 14:52:04.051369 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-01-16 14:52:04.051407 | orchestrator | Thursday 16 January 2025 14:52:02 +0000 (0:00:00.524) 0:04:19.570 ****** 2025-01-16 14:52:04.051488 | orchestrator | ok: [testbed-manager] 2025-01-16 14:52:04.052982 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:52:04.053020 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:52:04.053036 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:52:04.053063 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:52:04.053182 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:52:04.053217 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:52:04.053451 | orchestrator | 2025-01-16 14:52:04.053699 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-01-16 14:52:04.053884 | orchestrator | Thursday 16 January 2025 14:52:04 +0000 (0:00:01.117) 0:04:20.688 ****** 2025-01-16 14:52:04.768194 | orchestrator | ok: [testbed-manager] 2025-01-16 14:52:04.768483 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:52:04.768523 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:52:04.769932 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:52:04.770175 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:52:04.770488 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:52:04.770898 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:52:04.771166 | orchestrator | 2025-01-16 14:52:04.771525 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-01-16 14:52:04.771685 | orchestrator | Thursday 16 January 2025 14:52:04 +0000 (0:00:00.720) 0:04:21.408 ****** 2025-01-16 14:52:05.079156 | orchestrator | ok: [testbed-manager] 2025-01-16 14:52:05.124232 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:52:05.173083 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:52:05.428555 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:52:05.428847 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:52:05.428895 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:52:05.430447 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:52:06.469055 | orchestrator | 2025-01-16 14:52:06.469265 
| orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-01-16 14:52:06.469313 | orchestrator | Thursday 16 January 2025 14:52:05 +0000 (0:00:00.659) 0:04:22.067 ****** 2025-01-16 14:52:06.469361 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-01-16 14:52:06.469523 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-01-16 14:52:06.469547 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-01-16 14:52:06.469564 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-01-16 14:52:06.469591 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-01-16 14:52:06.469656 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-01-16 14:52:06.469788 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-01-16 14:52:06.469809 | orchestrator | 2025-01-16 14:52:06.469824 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-01-16 14:52:06.469844 | orchestrator | Thursday 16 January 2025 14:52:06 +0000 (0:00:01.039) 0:04:23.107 ****** 2025-01-16 14:52:06.979377 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 14:52:13.057170 | orchestrator | 2025-01-16 14:52:13.057274 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-01-16 14:52:13.057286 | orchestrator | Thursday 16 January 2025 14:52:06 +0000 (0:00:00.511) 0:04:23.619 ****** 2025-01-16 14:52:13.057309 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:52:14.174934 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:52:14.175097 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:52:14.175124 | orchestrator | changed: [testbed-manager] 2025-01-16 14:52:14.175177 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:52:14.175190 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:52:14.175202 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:52:14.175214 | orchestrator | 2025-01-16 14:52:14.175228 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-01-16 14:52:14.175241 | orchestrator | Thursday 16 January 2025 14:52:13 +0000 (0:00:06.076) 0:04:29.696 ****** 2025-01-16 14:52:14.175271 | orchestrator | ok: [testbed-manager] 2025-01-16 14:52:14.175547 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:52:14.175570 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:52:14.175582 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:52:14.175593 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:52:14.175611 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:52:14.175936 | 
orchestrator | ok: [testbed-node-5] 2025-01-16 14:52:14.176019 | orchestrator | 2025-01-16 14:52:14.176030 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-01-16 14:52:14.176046 | orchestrator | Thursday 16 January 2025 14:52:14 +0000 (0:00:01.117) 0:04:30.814 ****** 2025-01-16 14:52:15.202096 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:52:15.203008 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:52:15.203078 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:52:15.203113 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:52:15.203493 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:52:15.203720 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:52:15.204160 | orchestrator | 2025-01-16 14:52:15.204559 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-01-16 14:52:15.205079 | orchestrator | Thursday 16 January 2025 14:52:15 +0000 (0:00:01.026) 0:04:31.840 ****** 2025-01-16 14:52:16.005065 | orchestrator | changed: [testbed-manager] 2025-01-16 14:52:16.005296 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:52:16.005318 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:52:16.005328 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:52:16.005339 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:52:16.005349 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:52:16.005359 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:52:16.005369 | orchestrator | 2025-01-16 14:52:16.005380 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-01-16 14:52:16.005392 | orchestrator | 2025-01-16 14:52:16.005408 | orchestrator | TASK [Include hardening role] ************************************************** 2025-01-16 14:52:16.091660 | orchestrator | Thursday 16 January 2025 14:52:15 +0000 (0:00:00.804) 0:04:32.645 ****** 2025-01-16 14:52:16.091795 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:52:16.133849 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:52:16.175882 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:52:16.216215 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:52:16.257261 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:52:16.327423 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:52:16.327623 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:52:16.327651 | orchestrator | 2025-01-16 14:52:16.327910 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-01-16 14:52:16.327944 | orchestrator | 2025-01-16 14:52:16.328074 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-01-16 14:52:16.328285 | orchestrator | Thursday 16 January 2025 14:52:16 +0000 (0:00:00.322) 0:04:32.967 ****** 2025-01-16 14:52:17.174257 | orchestrator | changed: [testbed-manager] 2025-01-16 14:52:17.174382 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:52:17.174392 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:52:17.174403 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:52:17.174677 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:52:17.174949 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:52:17.175195 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:52:17.175279 | orchestrator | 2025-01-16 14:52:17.175576 | orchestrator | TASK [osism.services.journald : Manage journald service] 
*********************** 2025-01-16 14:52:17.175820 | orchestrator | Thursday 16 January 2025 14:52:17 +0000 (0:00:00.845) 0:04:33.813 ****** 2025-01-16 14:52:18.180142 | orchestrator | ok: [testbed-manager] 2025-01-16 14:52:18.180539 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:52:18.180585 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:52:18.180612 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:52:18.180637 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:52:18.180673 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:52:18.183956 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:52:18.184045 | orchestrator | 2025-01-16 14:52:18.184085 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-01-16 14:52:18.265798 | orchestrator | Thursday 16 January 2025 14:52:18 +0000 (0:00:01.006) 0:04:34.820 ****** 2025-01-16 14:52:18.265946 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:52:18.306314 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:52:18.348401 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:52:18.392031 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:52:18.432280 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:52:18.696511 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:52:19.528518 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:52:19.528671 | orchestrator | 2025-01-16 14:52:19.529539 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-01-16 14:52:19.529585 | orchestrator | Thursday 16 January 2025 14:52:18 +0000 (0:00:00.516) 0:04:35.336 ****** 2025-01-16 14:52:19.529624 | orchestrator | changed: [testbed-manager] 2025-01-16 14:52:19.529787 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:52:19.529810 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:52:19.529825 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:52:19.529839 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:52:19.529853 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:52:19.529867 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:52:19.529887 | orchestrator | 2025-01-16 14:52:19.530408 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-01-16 14:52:20.127420 | orchestrator | 2025-01-16 14:52:20.127546 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-01-16 14:52:20.127566 | orchestrator | Thursday 16 January 2025 14:52:19 +0000 (0:00:00.832) 0:04:36.169 ****** 2025-01-16 14:52:20.127665 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 14:52:20.127779 | orchestrator | 2025-01-16 14:52:20.127797 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-01-16 14:52:20.402853 | orchestrator | Thursday 16 January 2025 14:52:20 +0000 (0:00:00.598) 0:04:36.768 ****** 2025-01-16 14:52:20.403030 | orchestrator | ok: [testbed-manager] 2025-01-16 14:52:20.643713 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:52:20.643888 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:52:20.643898 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:52:20.643907 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:52:20.643955 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:52:20.644265 | orchestrator | ok: [testbed-node-5] 
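The osism.commons.state tasks above use Ansible's local custom facts: a file dropped into a facts directory on the host becomes visible under ansible_local after the next fact gathering. A minimal sketch of that mechanism, assuming the conventional /etc/ansible/facts.d location; the file name and contents are illustrative, not the role's literal output:

  # "Create custom facts directory" / "Write state into file", done by hand:
  sudo mkdir -p /etc/ansible/facts.d
  printf '[bootstrap]\nstatus=ok\n' | sudo tee /etc/ansible/facts.d/state.fact >/dev/null
  # The value then appears under ansible_local on the next fact gathering:
  ansible all -m setup -a 'filter=ansible_local'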
2025-01-16 14:52:20.644449 | orchestrator | 2025-01-16 14:52:20.644701 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-01-16 14:52:20.644862 | orchestrator | Thursday 16 January 2025 14:52:20 +0000 (0:00:00.515) 0:04:37.283 ****** 2025-01-16 14:52:21.361081 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:52:21.363976 | orchestrator | changed: [testbed-manager] 2025-01-16 14:52:21.364051 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:52:21.364118 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:52:21.364137 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:52:21.364154 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:52:21.364187 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:52:21.992520 | orchestrator | 2025-01-16 14:52:21.992623 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-01-16 14:52:21.992636 | orchestrator | Thursday 16 January 2025 14:52:21 +0000 (0:00:00.715) 0:04:37.999 ****** 2025-01-16 14:52:21.992659 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 14:52:22.539140 | orchestrator | 2025-01-16 14:52:22.539272 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-01-16 14:52:22.539288 | orchestrator | Thursday 16 January 2025 14:52:21 +0000 (0:00:00.633) 0:04:38.633 ****** 2025-01-16 14:52:22.539314 | orchestrator | ok: [testbed-manager] 2025-01-16 14:52:22.539363 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:52:22.539430 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:52:22.539442 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:52:22.539452 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:52:22.539461 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:52:22.539471 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:52:22.539483 | orchestrator | 2025-01-16 14:52:22.539694 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-01-16 14:52:22.540083 | orchestrator | Thursday 16 January 2025 14:52:22 +0000 (0:00:00.545) 0:04:39.178 ****** 2025-01-16 14:52:22.828452 | orchestrator | changed: [testbed-manager] 2025-01-16 14:52:23.293955 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:52:23.294238 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:52:23.294272 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:52:23.294293 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:52:23.294320 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:52:23.294599 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:52:23.295144 | orchestrator | 2025-01-16 14:52:23.295182 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 14:52:23.295335 | orchestrator | testbed-manager : ok=161  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-01-16 14:52:23.295590 | orchestrator | 2025-01-16 14:52:23 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-01-16 14:52:23.295628 | orchestrator | 2025-01-16 14:52:23 | INFO  | Please wait and do not abort execution. 
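With the bootstrap play finished, the services it configured can be spot-checked directly on a node. A few illustrative commands using standard tools (not part of the play; the chrony unit name assumes the Debian/Ubuntu packaging):

  systemctl is-active chrony lldpd docker    # services handled by the bootstrap roles
  chronyc tracking                           # time sync against the servers from chrony.conf
  lldpcli show neighbors                     # lldpd sees its link peers
  journalctl --disk-usage                    # journald limits from the copied configuration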
2025-01-16 14:52:23.295655 | orchestrator | testbed-node-0 : ok=169  changed=66  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-01-16 14:52:23.295938 | orchestrator | testbed-node-1 : ok=169  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-01-16 14:52:23.296043 | orchestrator | testbed-node-2 : ok=169  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-01-16 14:52:23.296250 | orchestrator | testbed-node-3 : ok=168  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-01-16 14:52:23.296478 | orchestrator | testbed-node-4 : ok=168  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-01-16 14:52:23.296861 | orchestrator | testbed-node-5 : ok=168  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-01-16 14:52:23.296983 | orchestrator | 2025-01-16 14:52:23.297160 | orchestrator | 2025-01-16 14:52:23.297374 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 14:52:23.297588 | orchestrator | Thursday 16 January 2025 14:52:23 +0000 (0:00:00.754) 0:04:39.933 ****** 2025-01-16 14:52:23.297904 | orchestrator | =============================================================================== 2025-01-16 14:52:23.297996 | orchestrator | osism.commons.packages : Install required packages --------------------- 36.38s 2025-01-16 14:52:23.298166 | orchestrator | osism.commons.packages : Download required packages -------------------- 33.96s 2025-01-16 14:52:23.298359 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 18.31s 2025-01-16 14:52:23.298565 | orchestrator | osism.commons.repository : Update package cache ------------------------- 9.10s 2025-01-16 14:52:23.298758 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.15s 2025-01-16 14:52:23.298891 | orchestrator | osism.services.docker : Install docker package -------------------------- 7.89s 2025-01-16 14:52:23.299294 | orchestrator | osism.services.docker : Install containerd package ---------------------- 6.30s 2025-01-16 14:52:23.299422 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 6.08s 2025-01-16 14:52:23.299625 | orchestrator | osism.commons.systohc : Install util-linux-extra package ---------------- 5.99s 2025-01-16 14:52:23.299891 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 4.94s 2025-01-16 14:52:23.300097 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 4.76s 2025-01-16 14:52:23.300216 | orchestrator | osism.services.rng : Install rng package -------------------------------- 4.73s 2025-01-16 14:52:23.300368 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 4.54s 2025-01-16 14:52:23.300682 | orchestrator | osism.services.docker : Add repository ---------------------------------- 4.12s 2025-01-16 14:52:23.300865 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required --- 3.60s 2025-01-16 14:52:23.301189 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 3.58s 2025-01-16 14:52:23.301378 | orchestrator | osism.services.docker : Ensure that some packages are not installed ----- 3.50s 2025-01-16 14:52:23.301560 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 3.48s 2025-01-16 14:52:23.301777 | orchestrator | 
osism.services.chrony : Populate service facts -------------------------- 2.50s 2025-01-16 14:52:23.301918 | orchestrator | osism.commons.services : Populate service facts ------------------------- 2.48s 2025-01-16 14:52:23.649335 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-01-16 14:52:24.627068 | orchestrator | + osism apply network 2025-01-16 14:52:24.627197 | orchestrator | 2025-01-16 14:52:24 | INFO  | Task 2c5f720c-3eb6-42da-b8db-9e78676e3f64 (network) was prepared for execution. 2025-01-16 14:52:26.842782 | orchestrator | 2025-01-16 14:52:24 | INFO  | It takes a moment until task 2c5f720c-3eb6-42da-b8db-9e78676e3f64 (network) has been started and output is visible here. 2025-01-16 14:52:26.842901 | orchestrator | 2025-01-16 14:52:26.943842 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-01-16 14:52:26.943938 | orchestrator | 2025-01-16 14:52:26.943950 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-01-16 14:52:26.943959 | orchestrator | Thursday 16 January 2025 14:52:26 +0000 (0:00:00.134) 0:00:00.134 ****** 2025-01-16 14:52:26.944003 | orchestrator | ok: [testbed-manager] 2025-01-16 14:52:26.994811 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:52:27.047638 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:52:27.099361 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:52:27.149489 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:52:27.280671 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:52:27.280925 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:52:27.280952 | orchestrator | 2025-01-16 14:52:27.280969 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-01-16 14:52:27.280992 | orchestrator | Thursday 16 January 2025 14:52:27 +0000 (0:00:00.438) 0:00:00.572 ****** 2025-01-16 14:52:28.031658 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 14:52:28.031836 | orchestrator | 2025-01-16 14:52:28.031854 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-01-16 14:52:28.031883 | orchestrator | Thursday 16 January 2025 14:52:28 +0000 (0:00:00.750) 0:00:01.323 ****** 2025-01-16 14:52:29.204879 | orchestrator | ok: [testbed-manager] 2025-01-16 14:52:29.205313 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:52:29.205344 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:52:29.205407 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:52:29.205662 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:52:29.205941 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:52:29.206395 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:52:29.206595 | orchestrator | 2025-01-16 14:52:29.206857 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-01-16 14:52:29.207056 | orchestrator | Thursday 16 January 2025 14:52:29 +0000 (0:00:01.172) 0:00:02.496 ****** 2025-01-16 14:52:30.255010 | orchestrator | ok: [testbed-manager] 2025-01-16 14:52:30.255154 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:52:30.255215 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:52:30.255234 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:52:30.255277 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:52:30.255439 | 
orchestrator | ok: [testbed-node-4] 2025-01-16 14:52:30.255658 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:52:30.255963 | orchestrator | 2025-01-16 14:52:30.256137 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-01-16 14:52:30.256327 | orchestrator | Thursday 16 January 2025 14:52:30 +0000 (0:00:01.048) 0:00:03.544 ****** 2025-01-16 14:52:30.615642 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-01-16 14:52:30.974316 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-01-16 14:52:30.974440 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-01-16 14:52:30.974476 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-01-16 14:52:32.314344 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-01-16 14:52:32.314458 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-01-16 14:52:32.314477 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-01-16 14:52:32.314493 | orchestrator | 2025-01-16 14:52:32.314509 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-01-16 14:52:32.314526 | orchestrator | Thursday 16 January 2025 14:52:30 +0000 (0:00:00.720) 0:00:04.264 ****** 2025-01-16 14:52:32.314557 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-01-16 14:52:32.317310 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-01-16 14:52:32.317333 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-01-16 14:52:32.317347 | orchestrator | ok: [testbed-manager -> localhost] 2025-01-16 14:52:32.317361 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-01-16 14:52:32.317380 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-01-16 14:52:33.273340 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-01-16 14:52:33.273439 | orchestrator | 2025-01-16 14:52:33.273455 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-01-16 14:52:33.273467 | orchestrator | Thursday 16 January 2025 14:52:32 +0000 (0:00:01.341) 0:00:05.606 ****** 2025-01-16 14:52:33.273539 | orchestrator | changed: [testbed-manager] 2025-01-16 14:52:33.273595 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:52:33.273607 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:52:33.273621 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:52:33.273796 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:52:33.273858 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:52:33.273903 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:52:33.274069 | orchestrator | 2025-01-16 14:52:33.275409 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-01-16 14:52:33.651163 | orchestrator | Thursday 16 January 2025 14:52:33 +0000 (0:00:00.957) 0:00:06.564 ****** 2025-01-16 14:52:33.651285 | orchestrator | ok: [testbed-manager -> localhost] 2025-01-16 14:52:33.947723 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-01-16 14:52:33.947938 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-01-16 14:52:33.947954 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-01-16 14:52:33.948158 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-01-16 14:52:33.948192 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-01-16 14:52:33.948809 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-01-16 14:52:33.948990 | orchestrator | 2025-01-16 14:52:33.949016 | orchestrator | TASK 
[osism.commons.network : Check if path for interface file exists] ********* 2025-01-16 14:52:33.949039 | orchestrator | Thursday 16 January 2025 14:52:33 +0000 (0:00:00.674) 0:00:07.238 ****** 2025-01-16 14:52:34.217310 | orchestrator | ok: [testbed-manager] 2025-01-16 14:52:34.275165 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:52:34.649083 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:52:34.649360 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:52:34.649389 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:52:34.649401 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:52:34.649414 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:52:34.649433 | orchestrator | 2025-01-16 14:52:34.754783 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-01-16 14:52:34.754874 | orchestrator | Thursday 16 January 2025 14:52:34 +0000 (0:00:00.701) 0:00:07.940 ****** 2025-01-16 14:52:34.754895 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:52:34.807083 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:52:34.858345 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:52:34.912817 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:52:34.967231 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:52:35.138308 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:52:35.138513 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:52:35.138540 | orchestrator | 2025-01-16 14:52:35.138557 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-01-16 14:52:35.138580 | orchestrator | Thursday 16 January 2025 14:52:35 +0000 (0:00:00.489) 0:00:08.429 ****** 2025-01-16 14:52:36.363844 | orchestrator | ok: [testbed-manager] 2025-01-16 14:52:37.468131 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:52:37.468308 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:52:37.468361 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:52:37.468377 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:52:37.468391 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:52:37.468405 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:52:37.468419 | orchestrator | 2025-01-16 14:52:37.468436 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-01-16 14:52:37.468451 | orchestrator | Thursday 16 January 2025 14:52:36 +0000 (0:00:01.225) 0:00:09.655 ****** 2025-01-16 14:52:37.468483 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-01-16 14:52:37.468608 | orchestrator | changed: [testbed-node-0] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-01-16 14:52:37.468855 | orchestrator | changed: [testbed-node-1] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-01-16 14:52:37.468982 | orchestrator | changed: [testbed-node-2] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-01-16 14:52:37.469000 | orchestrator | changed: [testbed-node-3] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-01-16 14:52:37.469045 | orchestrator | changed: [testbed-node-4] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-01-16 14:52:37.469064 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': 
'/opt/configuration/network/iptables.sh'}) 2025-01-16 14:52:37.469087 | orchestrator | changed: [testbed-node-5] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-01-16 14:52:37.469157 | orchestrator | 2025-01-16 14:52:37.469265 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-01-16 14:52:37.469289 | orchestrator | Thursday 16 January 2025 14:52:37 +0000 (0:00:01.103) 0:00:10.758 ****** 2025-01-16 14:52:38.409187 | orchestrator | ok: [testbed-manager] 2025-01-16 14:52:38.409406 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:52:38.409431 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:52:38.409445 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:52:38.409457 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:52:38.409494 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:52:38.409514 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:52:38.409556 | orchestrator | 2025-01-16 14:52:38.409574 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-01-16 14:52:38.409662 | orchestrator | Thursday 16 January 2025 14:52:38 +0000 (0:00:00.943) 0:00:11.701 ****** 2025-01-16 14:52:39.410630 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 14:52:39.783653 | orchestrator | 2025-01-16 14:52:39.783728 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-01-16 14:52:39.783774 | orchestrator | Thursday 16 January 2025 14:52:39 +0000 (0:00:01.000) 0:00:12.701 ****** 2025-01-16 14:52:39.783796 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:52:40.747020 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:52:40.747285 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:52:40.747357 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:52:40.747990 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:52:40.748289 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:52:40.748614 | orchestrator | ok: [testbed-manager] 2025-01-16 14:52:40.748684 | orchestrator | 2025-01-16 14:52:40.749020 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-01-16 14:52:40.749209 | orchestrator | Thursday 16 January 2025 14:52:40 +0000 (0:00:01.338) 0:00:14.040 ****** 2025-01-16 14:52:40.858304 | orchestrator | ok: [testbed-manager] 2025-01-16 14:52:40.912457 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:52:41.048694 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:52:41.108202 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:52:41.164314 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:52:41.250361 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:52:41.250594 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:52:41.250647 | orchestrator | 2025-01-16 14:52:41.250973 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-01-16 14:52:41.251122 | orchestrator | Thursday 16 January 2025 14:52:41 +0000 (0:00:00.500) 0:00:14.540 ****** 2025-01-16 14:52:41.487954 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-01-16 14:52:41.548439 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-01-16 14:52:41.548564 
| orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-01-16 14:52:41.608095 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-01-16 14:52:41.608195 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-01-16 14:52:41.925699 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-01-16 14:52:41.925889 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-01-16 14:52:41.925943 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-01-16 14:52:41.925955 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-01-16 14:52:41.925968 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-01-16 14:52:41.926251 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-01-16 14:52:41.926488 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-01-16 14:52:41.926829 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-01-16 14:52:41.927272 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-01-16 14:52:41.927479 | orchestrator | 2025-01-16 14:52:41.927499 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-01-16 14:52:41.927690 | orchestrator | Thursday 16 January 2025 14:52:41 +0000 (0:00:00.678) 0:00:15.219 ****** 2025-01-16 14:52:42.183235 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:52:42.240845 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:52:42.299809 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:52:42.358002 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:52:42.419436 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:52:43.250128 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:52:43.250536 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:52:43.250578 | orchestrator | 2025-01-16 14:52:43.250607 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-01-16 14:52:43.362201 | orchestrator | Thursday 16 January 2025 14:52:43 +0000 (0:00:01.321) 0:00:16.540 ****** 2025-01-16 14:52:43.362294 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:52:43.418872 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:52:43.606680 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:52:43.664509 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:52:43.723633 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:52:43.747346 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:52:43.747540 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:52:43.747574 | orchestrator | 2025-01-16 14:52:43.747611 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 14:52:43.748199 | orchestrator | 2025-01-16 14:52:43 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-01-16 14:52:43.748348 | orchestrator | 2025-01-16 14:52:43 | INFO  | Please wait and do not abort execution. 
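The network role above renders a netplan file on each host, removes the unused cloud-init default, and leaves applying the configuration to a handler (skipped in this run; the later workarounds play runs an explicit "Apply netplan configuration" task instead). Done by hand, the sequence looks roughly like this; the written and removed file names are taken from the log, the interface contents are placeholders:

  sudo rm -f /etc/netplan/50-cloud-init.yaml                    # "Remove unused configuration files"
  printf 'network:\n  version: 2\n  ethernets:\n    eth0:\n      dhcp4: true\n' | sudo tee /etc/netplan/01-osism.yaml >/dev/null   # placeholder interface, not the testbed layout
  sudo netplan apply                                            # roughly what applying the configuration amounts to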
2025-01-16 14:52:43.748371 | orchestrator | testbed-manager : ok=16  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-01-16 14:52:43.748829 | orchestrator | testbed-node-0 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-01-16 14:52:43.749005 | orchestrator | testbed-node-1 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-01-16 14:52:43.749365 | orchestrator | testbed-node-2 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-01-16 14:52:43.749629 | orchestrator | testbed-node-3 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-01-16 14:52:43.749897 | orchestrator | testbed-node-4 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-01-16 14:52:43.750227 | orchestrator | testbed-node-5 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-01-16 14:52:43.750525 | orchestrator | 2025-01-16 14:52:43.750865 | orchestrator | 2025-01-16 14:52:43.751101 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 14:52:43.751367 | orchestrator | Thursday 16 January 2025 14:52:43 +0000 (0:00:00.500) 0:00:17.041 ****** 2025-01-16 14:52:43.751776 | orchestrator | =============================================================================== 2025-01-16 14:52:43.752056 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 1.34s 2025-01-16 14:52:43.752365 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.34s 2025-01-16 14:52:43.752615 | orchestrator | osism.commons.network : Include dummy interfaces ------------------------ 1.32s 2025-01-16 14:52:43.752892 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 1.23s 2025-01-16 14:52:43.753226 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.17s 2025-01-16 14:52:43.753468 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 1.10s 2025-01-16 14:52:43.753847 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.05s 2025-01-16 14:52:43.754085 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.00s 2025-01-16 14:52:43.754194 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 0.96s 2025-01-16 14:52:43.754638 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 0.94s 2025-01-16 14:52:43.754988 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 0.75s 2025-01-16 14:52:43.755240 | orchestrator | osism.commons.network : Create required directories --------------------- 0.72s 2025-01-16 14:52:43.755508 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 0.70s 2025-01-16 14:52:43.755785 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 0.68s 2025-01-16 14:52:43.755958 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 0.67s 2025-01-16 14:52:43.756086 | orchestrator | osism.commons.network : Set network_configured_files fact --------------- 0.50s 2025-01-16 14:52:43.756296 | orchestrator | osism.commons.network : Netplan configuration changed ------------------- 0.50s 2025-01-16 14:52:43.756425 | orchestrator | 
osism.commons.network : Copy interfaces file ---------------------------- 0.49s 2025-01-16 14:52:43.756588 | orchestrator | osism.commons.network : Gather variables for each operating system ------ 0.44s 2025-01-16 14:52:44.107420 | orchestrator | + osism apply wireguard 2025-01-16 14:52:45.144284 | orchestrator | 2025-01-16 14:52:45 | INFO  | Task 97c9790e-2f31-4e6e-a0be-f0d8898c61ac (wireguard) was prepared for execution. 2025-01-16 14:52:47.368733 | orchestrator | 2025-01-16 14:52:45 | INFO  | It takes a moment until task 97c9790e-2f31-4e6e-a0be-f0d8898c61ac (wireguard) has been started and output is visible here. 2025-01-16 14:52:47.368938 | orchestrator | 2025-01-16 14:52:48.218348 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-01-16 14:52:48.218436 | orchestrator | 2025-01-16 14:52:48.218459 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-01-16 14:52:48.218464 | orchestrator | Thursday 16 January 2025 14:52:47 +0000 (0:00:00.113) 0:00:00.113 ****** 2025-01-16 14:52:48.218481 | orchestrator | ok: [testbed-manager] 2025-01-16 14:52:48.218513 | orchestrator | 2025-01-16 14:52:51.977392 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-01-16 14:52:51.977498 | orchestrator | Thursday 16 January 2025 14:52:48 +0000 (0:00:00.851) 0:00:00.964 ****** 2025-01-16 14:52:51.977521 | orchestrator | changed: [testbed-manager] 2025-01-16 14:52:52.326898 | orchestrator | 2025-01-16 14:52:52.327004 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-01-16 14:52:52.327016 | orchestrator | Thursday 16 January 2025 14:52:51 +0000 (0:00:03.757) 0:00:04.721 ****** 2025-01-16 14:52:52.327038 | orchestrator | changed: [testbed-manager] 2025-01-16 14:52:52.585286 | orchestrator | 2025-01-16 14:52:52.585402 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-01-16 14:52:52.585422 | orchestrator | Thursday 16 January 2025 14:52:52 +0000 (0:00:00.350) 0:00:05.072 ****** 2025-01-16 14:52:52.585448 | orchestrator | changed: [testbed-manager] 2025-01-16 14:52:52.911732 | orchestrator | 2025-01-16 14:52:52.911912 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-01-16 14:52:52.911927 | orchestrator | Thursday 16 January 2025 14:52:52 +0000 (0:00:00.256) 0:00:05.329 ****** 2025-01-16 14:52:52.911946 | orchestrator | ok: [testbed-manager] 2025-01-16 14:52:52.911991 | orchestrator | 2025-01-16 14:52:52.912238 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-01-16 14:52:52.912436 | orchestrator | Thursday 16 January 2025 14:52:52 +0000 (0:00:00.329) 0:00:05.658 ****** 2025-01-16 14:52:53.263621 | orchestrator | ok: [testbed-manager] 2025-01-16 14:52:53.522169 | orchestrator | 2025-01-16 14:52:53.522254 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-01-16 14:52:53.522263 | orchestrator | Thursday 16 January 2025 14:52:53 +0000 (0:00:00.350) 0:00:06.008 ****** 2025-01-16 14:52:53.522278 | orchestrator | ok: [testbed-manager] 2025-01-16 14:52:53.522303 | orchestrator | 2025-01-16 14:52:53.522309 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-01-16 14:52:53.522315 | orchestrator | Thursday 16 January 2025 14:52:53 +0000 (0:00:00.259) 0:00:06.267 
****** 2025-01-16 14:52:54.224588 | orchestrator | changed: [testbed-manager] 2025-01-16 14:52:54.226166 | orchestrator | 2025-01-16 14:52:54.226215 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-01-16 14:52:54.226243 | orchestrator | Thursday 16 January 2025 14:52:54 +0000 (0:00:00.700) 0:00:06.968 ****** 2025-01-16 14:52:54.738174 | orchestrator | changed: [testbed-manager] => (item=None) 2025-01-16 14:52:54.738298 | orchestrator | changed: [testbed-manager] 2025-01-16 14:52:54.738315 | orchestrator | 2025-01-16 14:52:54.738328 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-01-16 14:52:54.739963 | orchestrator | Thursday 16 January 2025 14:52:54 +0000 (0:00:00.515) 0:00:07.483 ****** 2025-01-16 14:52:55.832644 | orchestrator | changed: [testbed-manager] 2025-01-16 14:52:56.396318 | orchestrator | 2025-01-16 14:52:56.396478 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-01-16 14:52:56.396502 | orchestrator | Thursday 16 January 2025 14:52:55 +0000 (0:00:01.093) 0:00:08.577 ****** 2025-01-16 14:52:56.396535 | orchestrator | changed: [testbed-manager] 2025-01-16 14:52:56.396599 | orchestrator | 2025-01-16 14:52:56.396616 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 14:52:56.396632 | orchestrator | 2025-01-16 14:52:56 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-01-16 14:52:56.396647 | orchestrator | 2025-01-16 14:52:56 | INFO  | Please wait and do not abort execution. 2025-01-16 14:52:56.396665 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 14:52:56.397155 | orchestrator | 2025-01-16 14:52:56.397272 | orchestrator | 2025-01-16 14:52:56.397362 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 14:52:56.397736 | orchestrator | Thursday 16 January 2025 14:52:56 +0000 (0:00:00.563) 0:00:09.140 ****** 2025-01-16 14:52:56.398112 | orchestrator | =============================================================================== 2025-01-16 14:52:56.398690 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 3.76s 2025-01-16 14:52:56.398983 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.09s 2025-01-16 14:52:56.399054 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 0.85s 2025-01-16 14:52:56.399139 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 0.70s 2025-01-16 14:52:56.399394 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.56s 2025-01-16 14:52:56.399572 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.52s 2025-01-16 14:52:56.399877 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.35s 2025-01-16 14:52:56.400051 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.35s 2025-01-16 14:52:56.400207 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.33s 2025-01-16 14:52:56.400473 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.26s 2025-01-16 14:52:56.400627 | orchestrator | 
osism.services.wireguard : Create preshared key ------------------------- 0.26s 2025-01-16 14:52:56.726233 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-01-16 14:52:56.742425 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-01-16 14:52:56.810306 | orchestrator | Dload Upload Total Spent Left Speed 2025-01-16 14:52:56.810407 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 206 0 --:--:-- --:--:-- --:--:-- 208 2025-01-16 14:52:56.814956 | orchestrator | + osism apply --environment custom workarounds 2025-01-16 14:52:57.768115 | orchestrator | 2025-01-16 14:52:57 | INFO  | Trying to run play workarounds in environment custom 2025-01-16 14:52:57.798391 | orchestrator | 2025-01-16 14:52:57 | INFO  | Task 0663e77b-b065-46a8-a501-4a65ede17efa (workarounds) was prepared for execution. 2025-01-16 14:52:59.989376 | orchestrator | 2025-01-16 14:52:57 | INFO  | It takes a moment until task 0663e77b-b065-46a8-a501-4a65ede17efa (workarounds) has been started and output is visible here. 2025-01-16 14:52:59.989520 | orchestrator | 2025-01-16 14:52:59.991996 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-01-16 14:52:59.992077 | orchestrator | 2025-01-16 14:53:00.104630 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-01-16 14:53:00.104782 | orchestrator | Thursday 16 January 2025 14:52:59 +0000 (0:00:00.100) 0:00:00.100 ****** 2025-01-16 14:53:00.104814 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-01-16 14:53:00.162455 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-01-16 14:53:00.220466 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-01-16 14:53:00.277967 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-01-16 14:53:00.335465 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-01-16 14:53:00.513823 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-01-16 14:53:00.514223 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-01-16 14:53:00.514258 | orchestrator | 2025-01-16 14:53:00.514280 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-01-16 14:53:00.514362 | orchestrator | 2025-01-16 14:53:00.514381 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-01-16 14:53:00.514399 | orchestrator | Thursday 16 January 2025 14:53:00 +0000 (0:00:00.524) 0:00:00.625 ****** 2025-01-16 14:53:02.063982 | orchestrator | ok: [testbed-manager] 2025-01-16 14:53:02.064735 | orchestrator | 2025-01-16 14:53:02.064798 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-01-16 14:53:03.359400 | orchestrator | 2025-01-16 14:53:03.359530 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-01-16 14:53:03.359551 | orchestrator | Thursday 16 January 2025 14:53:02 +0000 (0:00:01.547) 0:00:02.172 ****** 2025-01-16 14:53:03.359583 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:53:03.360899 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:53:03.360936 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:53:03.361251 | orchestrator | ok: [testbed-node-3] 2025-01-16 
14:53:03.361276 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:53:03.361290 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:53:03.361334 | orchestrator | 2025-01-16 14:53:03.361350 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-01-16 14:53:03.361377 | orchestrator | 2025-01-16 14:53:03.361594 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-01-16 14:53:03.362116 | orchestrator | Thursday 16 January 2025 14:53:03 +0000 (0:00:01.297) 0:00:03.470 ****** 2025-01-16 14:53:04.250873 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-01-16 14:53:04.250973 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-01-16 14:53:04.250985 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-01-16 14:53:04.251160 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-01-16 14:53:04.251694 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-01-16 14:53:04.251903 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-01-16 14:53:04.251927 | orchestrator | 2025-01-16 14:53:04.252114 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-01-16 14:53:04.252346 | orchestrator | Thursday 16 January 2025 14:53:04 +0000 (0:00:00.891) 0:00:04.361 ****** 2025-01-16 14:53:05.807275 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:53:05.807507 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:53:05.811006 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:53:05.811070 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:53:05.912267 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:53:05.912362 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:53:05.912372 | orchestrator | 2025-01-16 14:53:05.912382 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-01-16 14:53:05.912391 | orchestrator | Thursday 16 January 2025 14:53:05 +0000 (0:00:01.558) 0:00:05.919 ****** 2025-01-16 14:53:05.912410 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:53:05.962597 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:53:06.019665 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:53:06.196891 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:53:06.288260 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:53:06.288446 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:53:06.288475 | orchestrator | 2025-01-16 14:53:06.288637 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-01-16 14:53:06.289334 | orchestrator | 2025-01-16 14:53:06.289445 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-01-16 14:53:06.289471 | orchestrator | Thursday 16 January 2025 14:53:06 +0000 (0:00:00.479) 0:00:06.399 ****** 2025-01-16 14:53:07.285448 | orchestrator | changed: [testbed-manager] 2025-01-16 14:53:07.285640 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:53:07.285680 | orchestrator | changed: [testbed-node-1] 2025-01-16 
14:53:07.285739 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:53:07.285844 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:53:07.286082 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:53:07.286208 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:53:07.286422 | orchestrator | 2025-01-16 14:53:07.288522 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-01-16 14:53:08.269858 | orchestrator | Thursday 16 January 2025 14:53:07 +0000 (0:00:00.997) 0:00:07.397 ****** 2025-01-16 14:53:08.269992 | orchestrator | changed: [testbed-manager] 2025-01-16 14:53:08.270206 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:53:08.270262 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:53:08.270275 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:53:08.270288 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:53:08.270968 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:53:08.271074 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:53:08.271089 | orchestrator | 2025-01-16 14:53:08.271105 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-01-16 14:53:08.271320 | orchestrator | Thursday 16 January 2025 14:53:08 +0000 (0:00:00.982) 0:00:08.379 ****** 2025-01-16 14:53:09.280241 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:53:09.280649 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:53:09.280694 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:53:09.280713 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:53:09.281515 | orchestrator | ok: [testbed-manager] 2025-01-16 14:53:09.281592 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:53:09.281634 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:53:09.281647 | orchestrator | 2025-01-16 14:53:09.281656 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-01-16 14:53:09.281670 | orchestrator | Thursday 16 January 2025 14:53:09 +0000 (0:00:01.012) 0:00:09.392 ****** 2025-01-16 14:53:10.450801 | orchestrator | changed: [testbed-manager] 2025-01-16 14:53:10.451198 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:53:10.451244 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:53:10.451447 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:53:10.451476 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:53:10.452491 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:53:10.452803 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:53:10.452829 | orchestrator | 2025-01-16 14:53:10.452849 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-01-16 14:53:10.453067 | orchestrator | Thursday 16 January 2025 14:53:10 +0000 (0:00:01.170) 0:00:10.562 ****** 2025-01-16 14:53:10.558560 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:53:10.618918 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:53:10.690684 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:53:10.740883 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:53:10.884582 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:53:10.974442 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:53:10.974838 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:53:10.974917 | orchestrator | 2025-01-16 14:53:10.974947 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-01-16 14:53:10.975036 | 
orchestrator | 2025-01-16 14:53:10.975057 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-01-16 14:53:10.975077 | orchestrator | Thursday 16 January 2025 14:53:10 +0000 (0:00:00.523) 0:00:11.085 ****** 2025-01-16 14:53:12.504238 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:53:12.504370 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:53:12.504384 | orchestrator | ok: [testbed-manager] 2025-01-16 14:53:12.504394 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:53:12.504403 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:53:12.504412 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:53:12.504425 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:53:12.504592 | orchestrator | 2025-01-16 14:53:12.504934 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 14:53:12.505079 | orchestrator | 2025-01-16 14:53:12 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-01-16 14:53:12.505262 | orchestrator | 2025-01-16 14:53:12 | INFO  | Please wait and do not abort execution. 2025-01-16 14:53:12.505560 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-01-16 14:53:12.506234 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 14:53:12.506459 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 14:53:12.506683 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 14:53:12.506930 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 14:53:12.507148 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 14:53:12.507365 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 14:53:12.507483 | orchestrator | 2025-01-16 14:53:12.507727 | orchestrator | 2025-01-16 14:53:12.507961 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 14:53:12.508090 | orchestrator | Thursday 16 January 2025 14:53:12 +0000 (0:00:01.530) 0:00:12.615 ****** 2025-01-16 14:53:12.508278 | orchestrator | =============================================================================== 2025-01-16 14:53:12.508444 | orchestrator | Run update-ca-certificates ---------------------------------------------- 1.56s 2025-01-16 14:53:12.508625 | orchestrator | Apply netplan configuration --------------------------------------------- 1.55s 2025-01-16 14:53:12.508893 | orchestrator | Install python3-docker -------------------------------------------------- 1.53s 2025-01-16 14:53:12.509033 | orchestrator | Apply netplan configuration --------------------------------------------- 1.30s 2025-01-16 14:53:12.509362 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.17s 2025-01-16 14:53:12.509445 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.01s 2025-01-16 14:53:12.509856 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.00s 2025-01-16 14:53:12.509956 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 0.98s 2025-01-16 14:53:12.510177 | 
orchestrator | Copy custom CA certificates --------------------------------------------- 0.89s 2025-01-16 14:53:12.510270 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.52s 2025-01-16 14:53:12.510531 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.52s 2025-01-16 14:53:12.510586 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.48s 2025-01-16 14:53:12.818638 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-01-16 14:53:13.805178 | orchestrator | 2025-01-16 14:53:13 | INFO  | Task f6cf0e68-b963-4895-873d-2f58bc0e9d88 (reboot) was prepared for execution. 2025-01-16 14:53:15.987992 | orchestrator | 2025-01-16 14:53:13 | INFO  | It takes a moment until task f6cf0e68-b963-4895-873d-2f58bc0e9d88 (reboot) has been started and output is visible here. 2025-01-16 14:53:15.988139 | orchestrator | 2025-01-16 14:53:15.988415 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-01-16 14:53:16.051627 | orchestrator | 2025-01-16 14:53:16.051836 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-01-16 14:53:16.051861 | orchestrator | Thursday 16 January 2025 14:53:15 +0000 (0:00:00.103) 0:00:00.103 ****** 2025-01-16 14:53:16.051893 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:53:16.051981 | orchestrator | 2025-01-16 14:53:16.052021 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-01-16 14:53:16.052228 | orchestrator | Thursday 16 January 2025 14:53:16 +0000 (0:00:00.066) 0:00:00.169 ****** 2025-01-16 14:53:16.558583 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:53:16.627351 | orchestrator | 2025-01-16 14:53:16.627438 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-01-16 14:53:16.627447 | orchestrator | Thursday 16 January 2025 14:53:16 +0000 (0:00:00.505) 0:00:00.675 ****** 2025-01-16 14:53:16.627463 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:53:16.627585 | orchestrator | 2025-01-16 14:53:16.627598 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-01-16 14:53:16.627607 | orchestrator | 2025-01-16 14:53:16.627615 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-01-16 14:53:16.627974 | orchestrator | Thursday 16 January 2025 14:53:16 +0000 (0:00:00.069) 0:00:00.744 ****** 2025-01-16 14:53:16.696654 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:53:17.013713 | orchestrator | 2025-01-16 14:53:17.013818 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-01-16 14:53:17.013842 | orchestrator | Thursday 16 January 2025 14:53:16 +0000 (0:00:00.070) 0:00:00.814 ****** 2025-01-16 14:53:17.013864 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:53:17.085242 | orchestrator | 2025-01-16 14:53:17.085373 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-01-16 14:53:17.085403 | orchestrator | Thursday 16 January 2025 14:53:17 +0000 (0:00:00.315) 0:00:01.130 ****** 2025-01-16 14:53:17.085490 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:53:17.085657 | orchestrator | 2025-01-16 14:53:17.085693 | orchestrator | PLAY [Reboot systems] 
********************************************************** 2025-01-16 14:53:17.085725 | orchestrator | 2025-01-16 14:53:17.085813 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-01-16 14:53:17.085836 | orchestrator | Thursday 16 January 2025 14:53:17 +0000 (0:00:00.071) 0:00:01.201 ****** 2025-01-16 14:53:17.146461 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:53:17.146593 | orchestrator | 2025-01-16 14:53:17.146610 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-01-16 14:53:17.146625 | orchestrator | Thursday 16 January 2025 14:53:17 +0000 (0:00:00.062) 0:00:01.264 ****** 2025-01-16 14:53:17.516515 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:53:17.516691 | orchestrator | 2025-01-16 14:53:17.516712 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-01-16 14:53:17.516728 | orchestrator | Thursday 16 January 2025 14:53:17 +0000 (0:00:00.369) 0:00:01.634 ****** 2025-01-16 14:53:17.585142 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:53:17.585476 | orchestrator | 2025-01-16 14:53:17.585517 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-01-16 14:53:17.585543 | orchestrator | 2025-01-16 14:53:17.585578 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-01-16 14:53:17.645454 | orchestrator | Thursday 16 January 2025 14:53:17 +0000 (0:00:00.067) 0:00:01.701 ****** 2025-01-16 14:53:17.645599 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:53:17.964843 | orchestrator | 2025-01-16 14:53:17.964977 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-01-16 14:53:17.964998 | orchestrator | Thursday 16 January 2025 14:53:17 +0000 (0:00:00.061) 0:00:01.763 ****** 2025-01-16 14:53:17.965027 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:53:18.035971 | orchestrator | 2025-01-16 14:53:18.036098 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-01-16 14:53:18.036123 | orchestrator | Thursday 16 January 2025 14:53:17 +0000 (0:00:00.319) 0:00:02.082 ****** 2025-01-16 14:53:18.036198 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:53:18.036276 | orchestrator | 2025-01-16 14:53:18.036300 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-01-16 14:53:18.036418 | orchestrator | 2025-01-16 14:53:18.036448 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-01-16 14:53:18.036650 | orchestrator | Thursday 16 January 2025 14:53:18 +0000 (0:00:00.068) 0:00:02.151 ****** 2025-01-16 14:53:18.108454 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:53:18.436345 | orchestrator | 2025-01-16 14:53:18.436462 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-01-16 14:53:18.436481 | orchestrator | Thursday 16 January 2025 14:53:18 +0000 (0:00:00.072) 0:00:02.223 ****** 2025-01-16 14:53:18.436511 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:53:18.501630 | orchestrator | 2025-01-16 14:53:18.501856 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-01-16 14:53:18.501882 | orchestrator | Thursday 16 January 2025 14:53:18 +0000 (0:00:00.330) 0:00:02.554 ****** 2025-01-16 
14:53:18.501942 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:53:18.502090 | orchestrator | 2025-01-16 14:53:18.502114 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-01-16 14:53:18.502130 | orchestrator | 2025-01-16 14:53:18.502146 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-01-16 14:53:18.502167 | orchestrator | Thursday 16 January 2025 14:53:18 +0000 (0:00:00.063) 0:00:02.618 ****** 2025-01-16 14:53:18.561464 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:53:18.877234 | orchestrator | 2025-01-16 14:53:18.877362 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-01-16 14:53:18.877382 | orchestrator | Thursday 16 January 2025 14:53:18 +0000 (0:00:00.061) 0:00:02.679 ****** 2025-01-16 14:53:18.877417 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:53:18.893914 | orchestrator | 2025-01-16 14:53:18.894170 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-01-16 14:53:18.894203 | orchestrator | Thursday 16 January 2025 14:53:18 +0000 (0:00:00.315) 0:00:02.994 ****** 2025-01-16 14:53:18.894235 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:53:18.894357 | orchestrator | 2025-01-16 14:53:18.894457 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 14:53:18.894501 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 14:53:18.894566 | orchestrator | 2025-01-16 14:53:18 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-01-16 14:53:18.894625 | orchestrator | 2025-01-16 14:53:18 | INFO  | Please wait and do not abort execution. 
2025-01-16 14:53:18.894682 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 14:53:18.894743 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 14:53:18.895396 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 14:53:18.895635 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 14:53:18.895717 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 14:53:18.895914 | orchestrator | 2025-01-16 14:53:18.895983 | orchestrator | 2025-01-16 14:53:18.896087 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 14:53:18.896129 | orchestrator | Thursday 16 January 2025 14:53:18 +0000 (0:00:00.017) 0:00:03.012 ****** 2025-01-16 14:53:18.896469 | orchestrator | =============================================================================== 2025-01-16 14:53:18.896631 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 2.16s 2025-01-16 14:53:18.896676 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.39s 2025-01-16 14:53:18.897147 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.36s 2025-01-16 14:53:19.150131 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-01-16 14:53:20.167497 | orchestrator | 2025-01-16 14:53:20 | INFO  | Task 57fc9bd8-2a23-413f-89ae-1ecc672eea2e (wait-for-connection) was prepared for execution. 2025-01-16 14:53:22.300256 | orchestrator | 2025-01-16 14:53:20 | INFO  | It takes a moment until task 57fc9bd8-2a23-413f-89ae-1ecc672eea2e (wait-for-connection) has been started and output is visible here. 2025-01-16 14:53:22.300374 | orchestrator | 2025-01-16 14:53:22.300836 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-01-16 14:53:22.300878 | orchestrator | 2025-01-16 14:53:22.300889 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-01-16 14:53:22.300904 | orchestrator | Thursday 16 January 2025 14:53:22 +0000 (0:00:00.116) 0:00:00.116 ****** 2025-01-16 14:53:33.242423 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:53:33.242713 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:53:33.242831 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:53:33.242854 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:53:33.242868 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:53:33.242882 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:53:33.242931 | orchestrator | 2025-01-16 14:53:33.243011 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 14:53:33.243033 | orchestrator | 2025-01-16 14:53:33 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-01-16 14:53:33.243056 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 14:53:33.247648 | orchestrator | 2025-01-16 14:53:33 | INFO  | Please wait and do not abort execution. 
2025-01-16 14:53:33.247751 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 14:53:33.502993 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 14:53:33.503145 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 14:53:33.503179 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 14:53:33.503204 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 14:53:33.503230 | orchestrator | 2025-01-16 14:53:33.503255 | orchestrator | 2025-01-16 14:53:33.503280 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 14:53:33.503306 | orchestrator | Thursday 16 January 2025 14:53:33 +0000 (0:00:10.941) 0:00:11.058 ****** 2025-01-16 14:53:33.503329 | orchestrator | =============================================================================== 2025-01-16 14:53:33.503355 | orchestrator | Wait until remote system is reachable ---------------------------------- 10.94s 2025-01-16 14:53:33.503401 | orchestrator | + osism apply hddtemp 2025-01-16 14:53:34.483719 | orchestrator | 2025-01-16 14:53:34 | INFO  | Task 77106948-8f76-4ea1-a3cf-4231fe4e36fc (hddtemp) was prepared for execution. 2025-01-16 14:53:36.666433 | orchestrator | 2025-01-16 14:53:34 | INFO  | It takes a moment until task 77106948-8f76-4ea1-a3cf-4231fe4e36fc (hddtemp) has been started and output is visible here. 2025-01-16 14:53:36.666552 | orchestrator | 2025-01-16 14:53:36.669042 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-01-16 14:53:36.669062 | orchestrator | 2025-01-16 14:53:36.768613 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-01-16 14:53:36.768783 | orchestrator | Thursday 16 January 2025 14:53:36 +0000 (0:00:00.137) 0:00:00.138 ****** 2025-01-16 14:53:36.768816 | orchestrator | ok: [testbed-manager] 2025-01-16 14:53:36.820969 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:53:36.872831 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:53:36.923328 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:53:36.976446 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:53:37.114092 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:53:37.891572 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:53:37.891702 | orchestrator | 2025-01-16 14:53:37.891735 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-01-16 14:53:37.891792 | orchestrator | Thursday 16 January 2025 14:53:37 +0000 (0:00:00.449) 0:00:00.587 ****** 2025-01-16 14:53:37.891832 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 14:53:39.081366 | orchestrator | 2025-01-16 14:53:39.081540 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-01-16 14:53:39.081556 | orchestrator | Thursday 16 January 2025 14:53:37 +0000 (0:00:00.776) 0:00:01.363 ****** 2025-01-16 14:53:39.081577 | orchestrator | ok: [testbed-manager] 2025-01-16 14:53:39.085267 | orchestrator | ok: [testbed-node-1] 2025-01-16 
14:53:39.085321 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:53:39.085329 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:53:39.085344 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:53:39.085544 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:53:39.085562 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:53:39.085749 | orchestrator | 2025-01-16 14:53:39.086160 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-01-16 14:53:39.086293 | orchestrator | Thursday 16 January 2025 14:53:39 +0000 (0:00:01.191) 0:00:02.555 ****** 2025-01-16 14:53:39.487517 | orchestrator | changed: [testbed-manager] 2025-01-16 14:53:39.545726 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:53:39.799741 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:53:39.799947 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:53:39.799972 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:53:39.799991 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:53:39.800821 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:53:41.395863 | orchestrator | 2025-01-16 14:53:41.396030 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-01-16 14:53:41.396068 | orchestrator | Thursday 16 January 2025 14:53:39 +0000 (0:00:00.716) 0:00:03.271 ****** 2025-01-16 14:53:41.396115 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:53:41.567922 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:53:41.568022 | orchestrator | ok: [testbed-manager] 2025-01-16 14:53:41.568035 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:53:41.568045 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:53:41.568055 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:53:41.568064 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:53:41.568074 | orchestrator | 2025-01-16 14:53:41.568084 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-01-16 14:53:41.568095 | orchestrator | Thursday 16 January 2025 14:53:41 +0000 (0:00:01.594) 0:00:04.865 ****** 2025-01-16 14:53:41.568118 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:53:41.623960 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:53:41.682529 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:53:41.734343 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:53:41.811918 | orchestrator | changed: [testbed-manager] 2025-01-16 14:53:41.812125 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:53:41.812501 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:53:41.812814 | orchestrator | 2025-01-16 14:53:41.813569 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-01-16 14:53:48.059330 | orchestrator | Thursday 16 January 2025 14:53:41 +0000 (0:00:00.420) 0:00:05.285 ****** 2025-01-16 14:53:48.059541 | orchestrator | changed: [testbed-manager] 2025-01-16 14:53:48.059752 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:53:48.059791 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:53:48.059801 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:53:48.059810 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:53:48.059823 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:53:48.060283 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:53:48.060748 | orchestrator | 2025-01-16 14:53:48.061229 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service 
tasks] **** 2025-01-16 14:53:48.061584 | orchestrator | Thursday 16 January 2025 14:53:48 +0000 (0:00:06.244) 0:00:11.529 ****** 2025-01-16 14:53:48.847853 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 14:53:48.848032 | orchestrator | 2025-01-16 14:53:48.848050 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-01-16 14:53:48.848068 | orchestrator | Thursday 16 January 2025 14:53:48 +0000 (0:00:00.790) 0:00:12.320 ****** 2025-01-16 14:53:50.033634 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:53:50.033831 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:53:50.034097 | orchestrator | changed: [testbed-manager] 2025-01-16 14:53:50.034425 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:53:50.035135 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:53:50.035192 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:53:50.035621 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:53:50.036141 | orchestrator | 2025-01-16 14:53:50.036426 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 14:53:50.036688 | orchestrator | 2025-01-16 14:53:50 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-01-16 14:53:50.037124 | orchestrator | 2025-01-16 14:53:50 | INFO  | Please wait and do not abort execution. 2025-01-16 14:53:50.037461 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 14:53:50.037880 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-01-16 14:53:50.038879 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-01-16 14:53:50.039319 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-01-16 14:53:50.039610 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-01-16 14:53:50.040029 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-01-16 14:53:50.040445 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-01-16 14:53:50.041084 | orchestrator | 2025-01-16 14:53:50.041381 | orchestrator | 2025-01-16 14:53:50.041406 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 14:53:50.041845 | orchestrator | Thursday 16 January 2025 14:53:50 +0000 (0:00:01.187) 0:00:13.507 ****** 2025-01-16 14:53:50.042205 | orchestrator | =============================================================================== 2025-01-16 14:53:50.042380 | orchestrator | osism.services.hddtemp : Install lm-sensors ----------------------------- 6.24s 2025-01-16 14:53:50.042539 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.59s 2025-01-16 14:53:50.042912 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.19s 2025-01-16 14:53:50.043164 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.19s 2025-01-16 14:53:50.043415 | orchestrator | 
osism.services.hddtemp : Include distribution specific service tasks ---- 0.79s 2025-01-16 14:53:50.043781 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 0.78s 2025-01-16 14:53:50.044024 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 0.72s 2025-01-16 14:53:50.044633 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.45s 2025-01-16 14:53:50.427741 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.42s 2025-01-16 14:53:50.428057 | orchestrator | + sudo systemctl restart docker-compose@manager 2025-01-16 14:54:29.375373 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-01-16 14:54:29.390413 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-01-16 14:54:29.390534 | orchestrator | + local max_attempts=60 2025-01-16 14:54:29.390556 | orchestrator | + local name=ceph-ansible 2025-01-16 14:54:29.390579 | orchestrator | + local attempt_num=1 2025-01-16 14:54:29.390600 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-01-16 14:54:29.390637 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-01-16 14:54:29.407067 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-01-16 14:54:29.407174 | orchestrator | + local max_attempts=60 2025-01-16 14:54:29.407193 | orchestrator | + local name=kolla-ansible 2025-01-16 14:54:29.407208 | orchestrator | + local attempt_num=1 2025-01-16 14:54:29.407222 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-01-16 14:54:29.407254 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-01-16 14:54:29.423433 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-01-16 14:54:29.423769 | orchestrator | + local max_attempts=60 2025-01-16 14:54:29.423870 | orchestrator | + local name=osism-ansible 2025-01-16 14:54:29.423889 | orchestrator | + local attempt_num=1 2025-01-16 14:54:29.423908 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-01-16 14:54:29.423973 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-01-16 14:54:29.539823 | orchestrator | + [[ true == \t\r\u\e ]] 2025-01-16 14:54:29.539909 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-01-16 14:54:29.539929 | orchestrator | ARA in ceph-ansible already disabled. 2025-01-16 14:54:29.653854 | orchestrator | ARA in kolla-ansible already disabled. 2025-01-16 14:54:29.747118 | orchestrator | ARA in osism-ansible already disabled. 2025-01-16 14:54:29.833858 | orchestrator | ARA in osism-kubernetes already disabled. 2025-01-16 14:54:30.895653 | orchestrator | + osism apply gather-facts 2025-01-16 14:54:30.895765 | orchestrator | 2025-01-16 14:54:30 | INFO  | Task 1af1c1e9-8be2-481c-a379-5cd8d4affea1 (gather-facts) was prepared for execution. 2025-01-16 14:54:40.355890 | orchestrator | 2025-01-16 14:54:30 | INFO  | It takes a moment until task 1af1c1e9-8be2-481c-a379-5cd8d4affea1 (gather-facts) has been started and output is visible here. 
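Note: the wait_for_container_healthy calls traced above poll the Docker engine until the named container reports a healthy state before the deployment continues. A minimal sketch of such a helper, assuming the same docker inspect format string seen in the trace and a one-second retry interval (the sleep and failure handling of the real script are not visible in this output):

wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Ask the Docker engine for the container's health status and retry until it is healthy.
    until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container $name did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$(( attempt_num + 1 ))
        sleep 1  # retry interval is an assumption, not taken from the trace
    done
}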
2025-01-16 14:54:40.356206 | orchestrator | 2025-01-16 14:54:44.842661 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-01-16 14:54:44.842917 | orchestrator | 2025-01-16 14:54:44.842941 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-01-16 14:54:44.842956 | orchestrator | Thursday 16 January 2025 14:54:40 +0000 (0:00:00.902) 0:00:00.902 ****** 2025-01-16 14:54:44.843063 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:54:44.843333 | orchestrator | ok: [testbed-manager] 2025-01-16 14:54:44.843427 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:54:44.843446 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:54:44.843461 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:54:44.843474 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:54:44.843488 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:54:44.843509 | orchestrator | 2025-01-16 14:54:44.843759 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-01-16 14:54:44.843886 | orchestrator | 2025-01-16 14:54:44.845338 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-01-16 14:54:44.966113 | orchestrator | Thursday 16 January 2025 14:54:44 +0000 (0:00:04.487) 0:00:05.390 ****** 2025-01-16 14:54:44.966284 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:54:45.046177 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:54:45.108544 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:54:45.177477 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:54:45.249163 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:54:46.567755 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:54:46.568142 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:54:46.568177 | orchestrator | 2025-01-16 14:54:46.568299 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 14:54:46.568324 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-01-16 14:54:46.568369 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-01-16 14:54:46.568385 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-01-16 14:54:46.568399 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-01-16 14:54:46.568413 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-01-16 14:54:46.568427 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-01-16 14:54:46.568442 | orchestrator | 2025-01-16 14:54:46 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-01-16 14:54:46.568458 | orchestrator | 2025-01-16 14:54:46 | INFO  | Please wait and do not abort execution. 
2025-01-16 14:54:46.568482 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-01-16 14:54:46.570261 | orchestrator | 2025-01-16 14:54:46.570289 | orchestrator | 2025-01-16 14:54:46.570304 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 14:54:46.570319 | orchestrator | Thursday 16 January 2025 14:54:46 +0000 (0:00:01.724) 0:00:07.115 ****** 2025-01-16 14:54:46.570340 | orchestrator | =============================================================================== 2025-01-16 14:54:46.846310 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.49s 2025-01-16 14:54:46.846460 | orchestrator | Gather facts for all hosts ---------------------------------------------- 1.72s 2025-01-16 14:54:46.846512 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helper-services.sh /usr/local/bin/deploy-helper 2025-01-16 14:54:46.853191 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-01-16 14:54:46.861454 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-01-16 14:54:46.867529 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-services.sh /usr/local/bin/deploy-ceph 2025-01-16 14:54:46.873975 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-rook-services.sh /usr/local/bin/deploy-rook 2025-01-16 14:54:46.880274 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure-services.sh /usr/local/bin/deploy-infrastructure 2025-01-16 14:54:46.886938 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack-services.sh /usr/local/bin/deploy-openstack 2025-01-16 14:54:46.893731 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring-services.sh /usr/local/bin/deploy-monitoring 2025-01-16 14:54:46.900060 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-01-16 14:54:46.906235 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-01-16 14:54:46.912692 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-services.sh /usr/local/bin/upgrade-ceph 2025-01-16 14:54:46.920239 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure-services.sh /usr/local/bin/upgrade-infrastructure 2025-01-16 14:54:46.927095 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack-services.sh /usr/local/bin/upgrade-openstack 2025-01-16 14:54:46.935088 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring-services.sh /usr/local/bin/upgrade-monitoring 2025-01-16 14:54:46.941373 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack-services.sh /usr/local/bin/bootstrap-openstack 2025-01-16 14:54:46.947848 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-01-16 14:54:46.955382 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-01-16 14:54:46.961755 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh 
/usr/local/bin/disable-local-registry 2025-01-16 14:54:46.967769 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-01-16 14:54:46.974264 | orchestrator | + [[ false == \t\r\u\e ]] 2025-01-16 14:54:47.473079 | orchestrator | changed 2025-01-16 14:54:47.548210 | 2025-01-16 14:54:47.548410 | TASK [Deploy services] 2025-01-16 14:54:47.657676 | orchestrator | skipping: Conditional result was False 2025-01-16 14:54:47.676733 | 2025-01-16 14:54:47.676857 | TASK [Deploy in a nutshell] 2025-01-16 14:54:48.381891 | orchestrator | + set -e 2025-01-16 14:54:48.382204 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-01-16 14:54:48.382253 | orchestrator | ++ export INTERACTIVE=false 2025-01-16 14:54:48.382275 | orchestrator | ++ INTERACTIVE=false 2025-01-16 14:54:48.382320 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-01-16 14:54:48.382339 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-01-16 14:54:48.382354 | orchestrator | + source /opt/manager-vars.sh 2025-01-16 14:54:48.382377 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-01-16 14:54:48.382400 | orchestrator | ++ NUMBER_OF_NODES=6 2025-01-16 14:54:48.382417 | orchestrator | ++ export CEPH_VERSION=quincy 2025-01-16 14:54:48.382432 | orchestrator | ++ CEPH_VERSION=quincy 2025-01-16 14:54:48.382448 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-01-16 14:54:48.382463 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-01-16 14:54:48.382477 | orchestrator | ++ export MANAGER_VERSION=latest 2025-01-16 14:54:48.382491 | orchestrator | ++ MANAGER_VERSION=latest 2025-01-16 14:54:48.382505 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-01-16 14:54:48.382520 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-01-16 14:54:48.382533 | orchestrator | ++ export ARA=false 2025-01-16 14:54:48.382548 | orchestrator | ++ ARA=false 2025-01-16 14:54:48.382562 | orchestrator | ++ export TEMPEST=false 2025-01-16 14:54:48.382577 | orchestrator | ++ TEMPEST=false 2025-01-16 14:54:48.382590 | orchestrator | ++ export IS_ZUUL=true 2025-01-16 14:54:48.382604 | orchestrator | ++ IS_ZUUL=true 2025-01-16 14:54:48.382618 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.54 2025-01-16 14:54:48.382645 | orchestrator | 2025-01-16 14:54:48.400195 | orchestrator | # PULL IMAGES 2025-01-16 14:54:48.400339 | orchestrator | 2025-01-16 14:54:48.400366 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.54 2025-01-16 14:54:48.400387 | orchestrator | ++ export EXTERNAL_API=false 2025-01-16 14:54:48.400406 | orchestrator | ++ EXTERNAL_API=false 2025-01-16 14:54:48.400427 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-01-16 14:54:48.400444 | orchestrator | ++ IMAGE_USER=ubuntu 2025-01-16 14:54:48.400478 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-01-16 14:54:48.400499 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-01-16 14:54:48.400517 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-01-16 14:54:48.400536 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-01-16 14:54:48.400553 | orchestrator | + echo 2025-01-16 14:54:48.400570 | orchestrator | + echo '# PULL IMAGES' 2025-01-16 14:54:48.400588 | orchestrator | + echo 2025-01-16 14:54:48.400605 | orchestrator | ++ semver latest 7.0.0 2025-01-16 14:54:48.400651 | orchestrator | + [[ -1 -ge 0 ]] 2025-01-16 14:54:49.306188 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-01-16 14:54:49.306320 | orchestrator | + osism apply -r 2 -e custom pull-images 2025-01-16 14:54:49.306378 | orchestrator | 2025-01-16 
14:54:49 | INFO  | Trying to run play pull-images in environment custom 2025-01-16 14:54:49.335890 | orchestrator | 2025-01-16 14:54:49 | INFO  | Task 0950ed04-be82-47ea-83ca-3c7d7400d60a (pull-images) was prepared for execution. 2025-01-16 14:54:52.391755 | orchestrator | 2025-01-16 14:54:49 | INFO  | It takes a moment until task 0950ed04-be82-47ea-83ca-3c7d7400d60a (pull-images) has been started and output is visible here. 2025-01-16 14:54:52.391929 | orchestrator | 2025-01-16 14:55:14.829148 | orchestrator | PLAY [Pull images] ************************************************************* 2025-01-16 14:55:14.829279 | orchestrator | 2025-01-16 14:55:14.829308 | orchestrator | TASK [Pull keystone image] ***************************************************** 2025-01-16 14:55:14.829321 | orchestrator | Thursday 16 January 2025 14:54:52 +0000 (0:00:00.868) 0:00:00.868 ****** 2025-01-16 14:55:14.829350 | orchestrator | changed: [testbed-manager] 2025-01-16 14:55:49.750616 | orchestrator | 2025-01-16 14:55:49.750750 | orchestrator | TASK [Pull other images] ******************************************************* 2025-01-16 14:55:49.750768 | orchestrator | Thursday 16 January 2025 14:55:14 +0000 (0:00:22.437) 0:00:23.306 ****** 2025-01-16 14:55:49.750799 | orchestrator | changed: [testbed-manager] => (item=aodh) 2025-01-16 14:55:49.751066 | orchestrator | changed: [testbed-manager] => (item=barbican) 2025-01-16 14:55:49.751097 | orchestrator | changed: [testbed-manager] => (item=ceilometer) 2025-01-16 14:55:49.751124 | orchestrator | changed: [testbed-manager] => (item=cinder) 2025-01-16 14:55:49.751161 | orchestrator | changed: [testbed-manager] => (item=common) 2025-01-16 14:55:49.751183 | orchestrator | changed: [testbed-manager] => (item=designate) 2025-01-16 14:55:49.751207 | orchestrator | changed: [testbed-manager] => (item=glance) 2025-01-16 14:55:49.751259 | orchestrator | changed: [testbed-manager] => (item=grafana) 2025-01-16 14:55:49.751292 | orchestrator | changed: [testbed-manager] => (item=heat) 2025-01-16 14:55:49.752871 | orchestrator | changed: [testbed-manager] => (item=horizon) 2025-01-16 14:55:49.756563 | orchestrator | changed: [testbed-manager] => (item=ironic) 2025-01-16 14:55:49.757829 | orchestrator | changed: [testbed-manager] => (item=loadbalancer) 2025-01-16 14:55:49.757884 | orchestrator | changed: [testbed-manager] => (item=magnum) 2025-01-16 14:55:49.757896 | orchestrator | changed: [testbed-manager] => (item=mariadb) 2025-01-16 14:55:49.757907 | orchestrator | changed: [testbed-manager] => (item=memcached) 2025-01-16 14:55:49.757917 | orchestrator | changed: [testbed-manager] => (item=neutron) 2025-01-16 14:55:49.757928 | orchestrator | changed: [testbed-manager] => (item=nova) 2025-01-16 14:55:49.757938 | orchestrator | changed: [testbed-manager] => (item=octavia) 2025-01-16 14:55:49.757948 | orchestrator | changed: [testbed-manager] => (item=opensearch) 2025-01-16 14:55:49.757958 | orchestrator | changed: [testbed-manager] => (item=openvswitch) 2025-01-16 14:55:49.757968 | orchestrator | changed: [testbed-manager] => (item=ovn) 2025-01-16 14:55:49.757980 | orchestrator | changed: [testbed-manager] => (item=placement) 2025-01-16 14:55:49.757991 | orchestrator | changed: [testbed-manager] => (item=rabbitmq) 2025-01-16 14:55:49.758001 | orchestrator | changed: [testbed-manager] => (item=redis) 2025-01-16 14:55:49.758011 | orchestrator | changed: [testbed-manager] => (item=skyline) 2025-01-16 14:55:49.758074 | orchestrator | 2025-01-16 14:55:49.758428 | orchestrator 
| PLAY RECAP ********************************************************************* 2025-01-16 14:55:49.758851 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 14:55:49.758925 | orchestrator | 2025-01-16 14:55:49 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-01-16 14:55:49.758934 | orchestrator | 2025-01-16 14:55:49 | INFO  | Please wait and do not abort execution. 2025-01-16 14:55:49.758949 | orchestrator | 2025-01-16 14:55:49.759081 | orchestrator | 2025-01-16 14:55:49.759093 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 14:55:49.759325 | orchestrator | Thursday 16 January 2025 14:55:49 +0000 (0:00:34.921) 0:00:58.227 ****** 2025-01-16 14:55:49.759590 | orchestrator | =============================================================================== 2025-01-16 14:55:49.759610 | orchestrator | Pull other images ------------------------------------------------------ 34.92s 2025-01-16 14:55:51.203567 | orchestrator | Pull keystone image ---------------------------------------------------- 22.44s 2025-01-16 14:55:51.203673 | orchestrator | 2025-01-16 14:55:51 | INFO  | Trying to run play wipe-partitions in environment custom 2025-01-16 14:55:51.235247 | orchestrator | 2025-01-16 14:55:51 | INFO  | Task 20eaf09d-d286-4603-8f1a-7db80b2c4b12 (wipe-partitions) was prepared for execution. 2025-01-16 14:55:54.856520 | orchestrator | 2025-01-16 14:55:51 | INFO  | It takes a moment until task 20eaf09d-d286-4603-8f1a-7db80b2c4b12 (wipe-partitions) has been started and output is visible here. 2025-01-16 14:55:54.856639 | orchestrator | 2025-01-16 14:55:54.858598 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-01-16 14:55:56.364308 | orchestrator | 2025-01-16 14:55:56.364437 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-01-16 14:55:56.364460 | orchestrator | Thursday 16 January 2025 14:55:54 +0000 (0:00:01.298) 0:00:01.298 ****** 2025-01-16 14:55:56.364492 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:55:56.367748 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:55:56.367885 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:55:56.367915 | orchestrator | 2025-01-16 14:55:56.367940 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-01-16 14:55:56.367974 | orchestrator | Thursday 16 January 2025 14:55:56 +0000 (0:00:01.508) 0:00:02.806 ****** 2025-01-16 14:55:56.488164 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:55:57.541013 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:55:57.545740 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:55:57.545799 | orchestrator | 2025-01-16 14:55:59.072556 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-01-16 14:55:59.072683 | orchestrator | Thursday 16 January 2025 14:55:57 +0000 (0:00:01.172) 0:00:03.979 ****** 2025-01-16 14:55:59.072720 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:55:59.073336 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:55:59.073414 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:55:59.073435 | orchestrator | 2025-01-16 14:55:59.073538 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-01-16 14:55:59.073854 | orchestrator | Thursday 16 
January 2025 14:55:59 +0000 (0:00:01.537) 0:00:05.517 ****** 2025-01-16 14:55:59.230936 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:56:00.244795 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:56:00.245074 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:56:00.245771 | orchestrator | 2025-01-16 14:56:02.097001 | orchestrator | TASK [Check device availability] *********************************************** 2025-01-16 14:56:02.097104 | orchestrator | Thursday 16 January 2025 14:56:00 +0000 (0:00:01.173) 0:00:06.690 ****** 2025-01-16 14:56:02.097127 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-01-16 14:56:02.098982 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-01-16 14:56:02.099008 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-01-16 14:56:02.099015 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-01-16 14:56:02.099032 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-01-16 14:56:02.099038 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-01-16 14:56:02.099043 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-01-16 14:56:02.099049 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-01-16 14:56:02.099055 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-01-16 14:56:02.099061 | orchestrator | 2025-01-16 14:56:02.099072 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-01-16 14:56:03.918398 | orchestrator | Thursday 16 January 2025 14:56:02 +0000 (0:00:01.848) 0:00:08.539 ****** 2025-01-16 14:56:03.918535 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-01-16 14:56:03.919013 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-01-16 14:56:03.920040 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-01-16 14:56:03.920177 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-01-16 14:56:03.920222 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-01-16 14:56:03.920338 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-01-16 14:56:03.920378 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2025-01-16 14:56:03.920398 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-01-16 14:56:03.920441 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-01-16 14:56:03.920458 | orchestrator | 2025-01-16 14:56:03.920859 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-01-16 14:56:06.371629 | orchestrator | Thursday 16 January 2025 14:56:03 +0000 (0:00:01.825) 0:00:10.364 ****** 2025-01-16 14:56:06.371792 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-01-16 14:56:06.371915 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-01-16 14:56:06.371930 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-01-16 14:56:06.371958 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-01-16 14:56:06.371969 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-01-16 14:56:06.371978 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-01-16 14:56:06.371987 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-01-16 14:56:06.371995 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-01-16 14:56:06.372004 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-01-16 14:56:06.372033 | orchestrator | 2025-01-16 
14:56:06.372043 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-01-16 14:56:06.372056 | orchestrator | Thursday 16 January 2025 14:56:06 +0000 (0:00:02.451) 0:00:12.815 ****** 2025-01-16 14:56:07.457660 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:56:07.458081 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:56:07.458477 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:56:07.458938 | orchestrator | 2025-01-16 14:56:07.459162 | orchestrator | TASK [Request device events from the kernel] *********************************** 2025-01-16 14:56:07.459859 | orchestrator | Thursday 16 January 2025 14:56:07 +0000 (0:00:01.086) 0:00:13.902 ****** 2025-01-16 14:56:09.147984 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:56:09.148110 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:56:09.148122 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:56:09.148127 | orchestrator | 2025-01-16 14:56:09.148137 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 14:56:09.148444 | orchestrator | 2025-01-16 14:56:09 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-01-16 14:56:09.148618 | orchestrator | 2025-01-16 14:56:09 | INFO  | Please wait and do not abort execution. 2025-01-16 14:56:09.149387 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 14:56:09.149556 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 14:56:09.149900 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 14:56:09.150105 | orchestrator | 2025-01-16 14:56:09.150305 | orchestrator | 2025-01-16 14:56:09.150705 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 14:56:09.152232 | orchestrator | Thursday 16 January 2025 14:56:09 +0000 (0:00:01.684) 0:00:15.587 ****** 2025-01-16 14:56:09.153117 | orchestrator | =============================================================================== 2025-01-16 14:56:09.154097 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.45s 2025-01-16 14:56:09.154125 | orchestrator | Check device availability ----------------------------------------------- 1.85s 2025-01-16 14:56:09.154136 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.83s 2025-01-16 14:56:09.154152 | orchestrator | Request device events from the kernel ----------------------------------- 1.68s 2025-01-16 14:56:09.155201 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 1.54s 2025-01-16 14:56:09.155499 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 1.51s 2025-01-16 14:56:09.155535 | orchestrator | Remove all ceph related logical devices --------------------------------- 1.17s 2025-01-16 14:56:09.155550 | orchestrator | Remove all rook related logical devices --------------------------------- 1.17s 2025-01-16 14:56:09.155570 | orchestrator | Reload udev rules ------------------------------------------------------- 1.09s 2025-01-16 14:56:10.737545 | orchestrator | 2025-01-16 14:56:10 | INFO  | Task f8e17a49-e245-4b19-86af-98a612f4d532 (facts) was prepared for execution. 
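Note: the wipe-partitions play above clears the extra disks (/dev/sdb, /dev/sdc and /dev/sdd on testbed-node-3 to testbed-node-5) of old Ceph signatures before the LVM volumes are configured. Done by hand, the wipefs, zeroing and udev steps correspond roughly to the following sketch; the device list and the 32M size come from the task names, while the exact options used by the play are assumptions:

for dev in /dev/sdb /dev/sdc /dev/sdd; do
    wipefs --all "$dev"                        # remove filesystem, RAID and LVM signatures
    dd if=/dev/zero of="$dev" bs=1M count=32   # overwrite the first 32M with zeros
done
udevadm control --reload-rules                 # reload udev rules
udevadm trigger                                # request device events from the kernel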
2025-01-16 14:56:15.669066 | orchestrator | 2025-01-16 14:56:10 | INFO  | It takes a moment until task f8e17a49-e245-4b19-86af-98a612f4d532 (facts) has been started and output is visible here. 2025-01-16 14:56:15.669224 | orchestrator | 2025-01-16 14:56:15.669280 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-01-16 14:56:15.669292 | orchestrator | 2025-01-16 14:56:15.669509 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-01-16 14:56:15.669588 | orchestrator | Thursday 16 January 2025 14:56:15 +0000 (0:00:01.893) 0:00:01.893 ****** 2025-01-16 14:56:18.443593 | orchestrator | ok: [testbed-manager] 2025-01-16 14:56:18.443800 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:56:18.443917 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:56:18.444098 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:56:18.444122 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:56:18.444310 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:56:18.444451 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:56:18.444671 | orchestrator | 2025-01-16 14:56:18.444888 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-01-16 14:56:18.445062 | orchestrator | Thursday 16 January 2025 14:56:18 +0000 (0:00:02.777) 0:00:04.671 ****** 2025-01-16 14:56:18.563757 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:56:18.628569 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:56:18.696132 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:56:18.792201 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:56:18.879158 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:56:20.243306 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:56:20.243559 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:56:20.243592 | orchestrator | 2025-01-16 14:56:20.243729 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-01-16 14:56:20.245255 | orchestrator | 2025-01-16 14:56:20.247596 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-01-16 14:56:24.974709 | orchestrator | Thursday 16 January 2025 14:56:20 +0000 (0:00:01.802) 0:00:06.473 ****** 2025-01-16 14:56:24.974900 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:56:24.976866 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:56:24.977114 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:56:24.977457 | orchestrator | ok: [testbed-manager] 2025-01-16 14:56:24.977511 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:56:24.977662 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:56:24.977702 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:56:24.978095 | orchestrator | 2025-01-16 14:56:24.980033 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-01-16 14:56:24.980497 | orchestrator | 2025-01-16 14:56:24.981111 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-01-16 14:56:24.982225 | orchestrator | Thursday 16 January 2025 14:56:24 +0000 (0:00:04.729) 0:00:11.203 ****** 2025-01-16 14:56:25.146703 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:56:25.225136 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:56:25.328052 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:56:25.422943 | orchestrator | skipping: [testbed-node-2] 2025-01-16 
14:56:25.498611 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:56:28.430099 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:56:28.431771 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:56:28.431876 | orchestrator | 2025-01-16 14:56:28.431887 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 14:56:28.431897 | orchestrator | 2025-01-16 14:56:28 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-01-16 14:56:28.432094 | orchestrator | 2025-01-16 14:56:28 | INFO  | Please wait and do not abort execution. 2025-01-16 14:56:28.432107 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 14:56:28.432158 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 14:56:28.432429 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 14:56:28.432634 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 14:56:28.432807 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 14:56:28.433008 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 14:56:28.433202 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 14:56:28.433384 | orchestrator | 2025-01-16 14:56:28.433640 | orchestrator | 2025-01-16 14:56:28.433804 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 14:56:28.433944 | orchestrator | Thursday 16 January 2025 14:56:28 +0000 (0:00:03.456) 0:00:14.659 ****** 2025-01-16 14:56:28.434120 | orchestrator | =============================================================================== 2025-01-16 14:56:28.434360 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.73s 2025-01-16 14:56:28.434536 | orchestrator | Gather facts for all hosts ---------------------------------------------- 3.46s 2025-01-16 14:56:28.434699 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 2.78s 2025-01-16 14:56:28.435017 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.80s 2025-01-16 14:56:29.911984 | orchestrator | 2025-01-16 14:56:29 | INFO  | Task 98210f7b-d53c-437a-8938-f60d82b666ac (ceph-configure-lvm-volumes) was prepared for execution. 2025-01-16 14:56:33.305621 | orchestrator | 2025-01-16 14:56:29 | INFO  | It takes a moment until task 98210f7b-d53c-437a-8938-f60d82b666ac (ceph-configure-lvm-volumes) has been started and output is visible here. 
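Note: the ceph-configure-lvm-volumes task started here enumerates the block devices of each storage node and records their persistent /dev/disk/by-id aliases (the scsi-0QEMU_QEMU_HARDDISK_... links listed below) so that the generated Ceph LVM configuration can refer to disks by a stable name. Outside of the play, the same mapping can be inspected with a short loop; this is only an illustration of the lookup, not code from the playbook:

for link in /dev/disk/by-id/*; do
    # Resolve each persistent alias to the kernel device it currently points at (sda, sdb, ...).
    printf '%s -> %s\n' "${link##*/}" "$(readlink -f "$link")"
done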
2025-01-16 14:56:33.305779 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-01-16 14:56:33.705441 | orchestrator | 2025-01-16 14:56:33.870225 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-01-16 14:56:33.870390 | orchestrator | 2025-01-16 14:56:33.870424 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-01-16 14:56:33.870442 | orchestrator | Thursday 16 January 2025 14:56:33 +0000 (0:00:00.338) 0:00:00.338 ****** 2025-01-16 14:56:33.870475 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-01-16 14:56:33.871125 | orchestrator | 2025-01-16 14:56:33.871158 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-01-16 14:56:33.871179 | orchestrator | Thursday 16 January 2025 14:56:33 +0000 (0:00:00.167) 0:00:00.506 ****** 2025-01-16 14:56:34.038771 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:56:34.386349 | orchestrator | 2025-01-16 14:56:34.386442 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:56:34.386453 | orchestrator | Thursday 16 January 2025 14:56:34 +0000 (0:00:00.163) 0:00:00.670 ****** 2025-01-16 14:56:34.386473 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-01-16 14:56:34.388768 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-01-16 14:56:34.388910 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-01-16 14:56:34.388977 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-01-16 14:56:34.389094 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-01-16 14:56:34.389320 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-01-16 14:56:34.389691 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-01-16 14:56:34.390008 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-01-16 14:56:34.390139 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-01-16 14:56:34.392420 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-01-16 14:56:34.392939 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-01-16 14:56:34.393100 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-01-16 14:56:34.393159 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-01-16 14:56:34.393243 | orchestrator | 2025-01-16 14:56:34.393466 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:56:34.393509 | orchestrator | Thursday 16 January 2025 14:56:34 +0000 (0:00:00.351) 0:00:01.022 ****** 2025-01-16 14:56:34.527547 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:56:34.527721 | orchestrator | 2025-01-16 14:56:34.529578 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:56:34.530182 | orchestrator | Thursday 16 January 2025 14:56:34 +0000 
(0:00:00.141) 0:00:01.163 ****** 2025-01-16 14:56:34.660885 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:56:34.661164 | orchestrator | 2025-01-16 14:56:34.661254 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:56:34.661648 | orchestrator | Thursday 16 January 2025 14:56:34 +0000 (0:00:00.133) 0:00:01.297 ****** 2025-01-16 14:56:34.796655 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:56:34.796775 | orchestrator | 2025-01-16 14:56:34.796791 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:56:34.798053 | orchestrator | Thursday 16 January 2025 14:56:34 +0000 (0:00:00.133) 0:00:01.430 ****** 2025-01-16 14:56:34.926531 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:56:34.927032 | orchestrator | 2025-01-16 14:56:34.927066 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:56:34.927284 | orchestrator | Thursday 16 January 2025 14:56:34 +0000 (0:00:00.132) 0:00:01.562 ****** 2025-01-16 14:56:35.059985 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:56:35.060316 | orchestrator | 2025-01-16 14:56:35.060465 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:56:35.060917 | orchestrator | Thursday 16 January 2025 14:56:35 +0000 (0:00:00.134) 0:00:01.696 ****** 2025-01-16 14:56:35.190632 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:56:35.190932 | orchestrator | 2025-01-16 14:56:35.190970 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:56:35.190994 | orchestrator | Thursday 16 January 2025 14:56:35 +0000 (0:00:00.130) 0:00:01.827 ****** 2025-01-16 14:56:35.317955 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:56:35.436880 | orchestrator | 2025-01-16 14:56:35.436997 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:56:35.437012 | orchestrator | Thursday 16 January 2025 14:56:35 +0000 (0:00:00.125) 0:00:01.952 ****** 2025-01-16 14:56:35.437038 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:56:35.437122 | orchestrator | 2025-01-16 14:56:35.437201 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:56:35.437416 | orchestrator | Thursday 16 January 2025 14:56:35 +0000 (0:00:00.121) 0:00:02.073 ****** 2025-01-16 14:56:35.834665 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6fd23866-075a-4a28-b944-e328afaaaf4b) 2025-01-16 14:56:35.834813 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6fd23866-075a-4a28-b944-e328afaaaf4b) 2025-01-16 14:56:35.834874 | orchestrator | 2025-01-16 14:56:35.835135 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:56:35.835595 | orchestrator | Thursday 16 January 2025 14:56:35 +0000 (0:00:00.397) 0:00:02.470 ****** 2025-01-16 14:56:36.276877 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_a3fa75ed-12ad-4d98-b1e3-06058efbf95a) 2025-01-16 14:56:36.280288 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_a3fa75ed-12ad-4d98-b1e3-06058efbf95a) 2025-01-16 14:56:36.763140 | orchestrator | 2025-01-16 14:56:36.763226 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 
14:56:36.763233 | orchestrator | Thursday 16 January 2025 14:56:36 +0000 (0:00:00.441) 0:00:02.912 ****** 2025-01-16 14:56:36.763270 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0646438b-3566-4bd7-ac9f-c7444a60ff3f) 2025-01-16 14:56:36.763514 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0646438b-3566-4bd7-ac9f-c7444a60ff3f) 2025-01-16 14:56:36.763788 | orchestrator | 2025-01-16 14:56:36.764238 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:56:36.764511 | orchestrator | Thursday 16 January 2025 14:56:36 +0000 (0:00:00.487) 0:00:03.400 ****** 2025-01-16 14:56:37.075285 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_72b30f3d-ea4f-4fbe-a722-d77662b0ee19) 2025-01-16 14:56:37.075502 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_72b30f3d-ea4f-4fbe-a722-d77662b0ee19) 2025-01-16 14:56:37.075611 | orchestrator | 2025-01-16 14:56:37.075669 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:56:37.075690 | orchestrator | Thursday 16 January 2025 14:56:37 +0000 (0:00:00.311) 0:00:03.711 ****** 2025-01-16 14:56:37.298871 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-01-16 14:56:37.299087 | orchestrator | 2025-01-16 14:56:37.299114 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:56:37.299137 | orchestrator | Thursday 16 January 2025 14:56:37 +0000 (0:00:00.223) 0:00:03.934 ****** 2025-01-16 14:56:37.559900 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-01-16 14:56:37.560208 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-01-16 14:56:37.560226 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-01-16 14:56:37.560239 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-01-16 14:56:37.560245 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-01-16 14:56:37.560253 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-01-16 14:56:37.560412 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-01-16 14:56:37.560593 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-01-16 14:56:37.560733 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-01-16 14:56:37.560955 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-01-16 14:56:37.561058 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-01-16 14:56:37.561237 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-01-16 14:56:37.561406 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-01-16 14:56:37.561542 | orchestrator | 2025-01-16 14:56:37.561680 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:56:37.561857 | orchestrator | Thursday 16 January 2025 14:56:37 
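
The repeated "Add known links" passes above append each device's /dev/disk/by-id aliases (for example scsi-0QEMU_QEMU_HARDDISK_... and scsi-SQEMU_QEMU_HARDDISK_...) to the list of usable block devices, so OSD disks can later be addressed by a stable identifier. The contents of /ansible/tasks/_add-device-links.yml are not visible in this log; the tasks below are only a rough sketch, under the assumption that the include resolves by-id symlinks back to their kernel device name, and every variable name in it is illustrative rather than taken from the playbook.

    # Illustrative only: for the current device ("item", e.g. sdb) collect the
    # /dev/disk/by-id links that resolve to it and append their names to an
    # assumed fact "block_devices_links".
    - name: Collect by-id links
      ansible.builtin.find:
        paths: /dev/disk/by-id
        file_type: link
      register: _by_id_links

    - name: Resolve each link to its target device
      ansible.builtin.stat:
        path: "{{ link.path }}"
      loop: "{{ _by_id_links.files }}"
      loop_control:
        loop_var: link
      register: _by_id_stat

    - name: Remember links that point at the current device
      ansible.builtin.set_fact:
        block_devices_links: >-
          {{ block_devices_links | default([]) + [result.stat.path | basename] }}
      loop: "{{ _by_id_stat.results }}"
      loop_control:
        loop_var: result
      when:
        - result.stat.lnk_source is defined
        - result.stat.lnk_source | basename == item
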
+0000 (0:00:00.261) 0:00:04.196 ****** 2025-01-16 14:56:37.689359 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:56:37.689650 | orchestrator | 2025-01-16 14:56:37.821187 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:56:37.821278 | orchestrator | Thursday 16 January 2025 14:56:37 +0000 (0:00:00.130) 0:00:04.326 ****** 2025-01-16 14:56:37.821297 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:56:37.961308 | orchestrator | 2025-01-16 14:56:37.961462 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:56:37.961509 | orchestrator | Thursday 16 January 2025 14:56:37 +0000 (0:00:00.131) 0:00:04.458 ****** 2025-01-16 14:56:37.961554 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:56:37.962997 | orchestrator | 2025-01-16 14:56:37.963383 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:56:38.102799 | orchestrator | Thursday 16 January 2025 14:56:37 +0000 (0:00:00.139) 0:00:04.597 ****** 2025-01-16 14:56:38.102957 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:56:38.232354 | orchestrator | 2025-01-16 14:56:38.232461 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:56:38.232480 | orchestrator | Thursday 16 January 2025 14:56:38 +0000 (0:00:00.139) 0:00:04.736 ****** 2025-01-16 14:56:38.232513 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:56:38.232595 | orchestrator | 2025-01-16 14:56:38.232619 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:56:38.232905 | orchestrator | Thursday 16 January 2025 14:56:38 +0000 (0:00:00.132) 0:00:04.869 ****** 2025-01-16 14:56:38.586377 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:56:38.587840 | orchestrator | 2025-01-16 14:56:38.587897 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:56:38.588054 | orchestrator | Thursday 16 January 2025 14:56:38 +0000 (0:00:00.353) 0:00:05.222 ****** 2025-01-16 14:56:38.719437 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:56:38.719585 | orchestrator | 2025-01-16 14:56:38.719643 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:56:38.719699 | orchestrator | Thursday 16 January 2025 14:56:38 +0000 (0:00:00.132) 0:00:05.355 ****** 2025-01-16 14:56:38.849733 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:56:38.849927 | orchestrator | 2025-01-16 14:56:38.850197 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:56:38.850215 | orchestrator | Thursday 16 January 2025 14:56:38 +0000 (0:00:00.130) 0:00:05.486 ****** 2025-01-16 14:56:39.290700 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-01-16 14:56:39.290971 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-01-16 14:56:39.291020 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-01-16 14:56:39.291434 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-01-16 14:56:39.291482 | orchestrator | 2025-01-16 14:56:39.292329 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:56:39.429100 | orchestrator | Thursday 16 January 2025 14:56:39 +0000 (0:00:00.439) 0:00:05.926 ****** 2025-01-16 14:56:39.429255 | orchestrator | 
skipping: [testbed-node-3] 2025-01-16 14:56:39.429320 | orchestrator | 2025-01-16 14:56:39.429339 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:56:39.429361 | orchestrator | Thursday 16 January 2025 14:56:39 +0000 (0:00:00.139) 0:00:06.065 ****** 2025-01-16 14:56:39.564291 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:56:39.564505 | orchestrator | 2025-01-16 14:56:39.564725 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:56:39.564803 | orchestrator | Thursday 16 January 2025 14:56:39 +0000 (0:00:00.134) 0:00:06.199 ****** 2025-01-16 14:56:39.697186 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:56:39.697376 | orchestrator | 2025-01-16 14:56:39.697398 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:56:39.697620 | orchestrator | Thursday 16 January 2025 14:56:39 +0000 (0:00:00.132) 0:00:06.332 ****** 2025-01-16 14:56:39.825601 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:56:39.825792 | orchestrator | 2025-01-16 14:56:39.825820 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-01-16 14:56:39.825891 | orchestrator | Thursday 16 January 2025 14:56:39 +0000 (0:00:00.129) 0:00:06.462 ****** 2025-01-16 14:56:39.938910 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-01-16 14:56:39.939477 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-01-16 14:56:39.939804 | orchestrator | 2025-01-16 14:56:39.940189 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-01-16 14:56:40.028132 | orchestrator | Thursday 16 January 2025 14:56:39 +0000 (0:00:00.113) 0:00:06.575 ****** 2025-01-16 14:56:40.028251 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:56:40.028401 | orchestrator | 2025-01-16 14:56:40.028424 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-01-16 14:56:40.114313 | orchestrator | Thursday 16 January 2025 14:56:40 +0000 (0:00:00.087) 0:00:06.662 ****** 2025-01-16 14:56:40.114408 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:56:40.114446 | orchestrator | 2025-01-16 14:56:40.114456 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-01-16 14:56:40.114590 | orchestrator | Thursday 16 January 2025 14:56:40 +0000 (0:00:00.087) 0:00:06.750 ****** 2025-01-16 14:56:40.313979 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:56:40.314249 | orchestrator | 2025-01-16 14:56:40.314279 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-01-16 14:56:40.314517 | orchestrator | Thursday 16 January 2025 14:56:40 +0000 (0:00:00.200) 0:00:06.950 ****** 2025-01-16 14:56:40.400139 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:56:40.400306 | orchestrator | 2025-01-16 14:56:40.400332 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-01-16 14:56:40.400608 | orchestrator | Thursday 16 January 2025 14:56:40 +0000 (0:00:00.085) 0:00:07.036 ****** 2025-01-16 14:56:40.511610 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '53488163-bd74-50cc-bfa0-f1a94ed01f33'}}) 2025-01-16 14:56:40.511931 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 
'value': {'osd_lvm_uuid': '562c7eeb-0cc2-5747-a030-082dcf3dd7cc'}}) 2025-01-16 14:56:40.511958 | orchestrator | 2025-01-16 14:56:40.511974 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-01-16 14:56:40.512333 | orchestrator | Thursday 16 January 2025 14:56:40 +0000 (0:00:00.110) 0:00:07.147 ****** 2025-01-16 14:56:40.605547 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '53488163-bd74-50cc-bfa0-f1a94ed01f33'}})  2025-01-16 14:56:40.608625 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '562c7eeb-0cc2-5747-a030-082dcf3dd7cc'}})  2025-01-16 14:56:40.710268 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:56:40.710357 | orchestrator | 2025-01-16 14:56:40.710367 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-01-16 14:56:40.710375 | orchestrator | Thursday 16 January 2025 14:56:40 +0000 (0:00:00.094) 0:00:07.242 ****** 2025-01-16 14:56:40.710399 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '53488163-bd74-50cc-bfa0-f1a94ed01f33'}})  2025-01-16 14:56:40.710643 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '562c7eeb-0cc2-5747-a030-082dcf3dd7cc'}})  2025-01-16 14:56:40.710675 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:56:40.711047 | orchestrator | 2025-01-16 14:56:40.711453 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-01-16 14:56:40.711711 | orchestrator | Thursday 16 January 2025 14:56:40 +0000 (0:00:00.104) 0:00:07.347 ****** 2025-01-16 14:56:40.808562 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '53488163-bd74-50cc-bfa0-f1a94ed01f33'}})  2025-01-16 14:56:40.809149 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '562c7eeb-0cc2-5747-a030-082dcf3dd7cc'}})  2025-01-16 14:56:40.809233 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:56:40.809257 | orchestrator | 2025-01-16 14:56:40.809334 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-01-16 14:56:40.809352 | orchestrator | Thursday 16 January 2025 14:56:40 +0000 (0:00:00.097) 0:00:07.445 ****** 2025-01-16 14:56:40.898114 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:56:40.898349 | orchestrator | 2025-01-16 14:56:40.898406 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-01-16 14:56:40.898502 | orchestrator | Thursday 16 January 2025 14:56:40 +0000 (0:00:00.089) 0:00:07.534 ****** 2025-01-16 14:56:40.988716 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:56:40.988902 | orchestrator | 2025-01-16 14:56:40.988928 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-01-16 14:56:40.989010 | orchestrator | Thursday 16 January 2025 14:56:40 +0000 (0:00:00.090) 0:00:07.624 ****** 2025-01-16 14:56:41.075113 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:56:41.075250 | orchestrator | 2025-01-16 14:56:41.075264 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-01-16 14:56:41.075274 | orchestrator | Thursday 16 January 2025 14:56:41 +0000 (0:00:00.085) 0:00:07.710 ****** 2025-01-16 14:56:41.153036 | orchestrator | skipping: [testbed-node-3] 2025-01-16 
14:56:41.153681 | orchestrator | 2025-01-16 14:56:41.154005 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-01-16 14:56:41.154113 | orchestrator | Thursday 16 January 2025 14:56:41 +0000 (0:00:00.079) 0:00:07.790 ****** 2025-01-16 14:56:41.238321 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:56:41.238473 | orchestrator | 2025-01-16 14:56:41.238510 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-01-16 14:56:41.238566 | orchestrator | Thursday 16 January 2025 14:56:41 +0000 (0:00:00.085) 0:00:07.875 ****** 2025-01-16 14:56:41.323585 | orchestrator | ok: [testbed-node-3] => { 2025-01-16 14:56:41.323763 | orchestrator |  "ceph_osd_devices": { 2025-01-16 14:56:41.323791 | orchestrator |  "sdb": { 2025-01-16 14:56:41.323990 | orchestrator |  "osd_lvm_uuid": "53488163-bd74-50cc-bfa0-f1a94ed01f33" 2025-01-16 14:56:41.324338 | orchestrator |  }, 2025-01-16 14:56:41.324672 | orchestrator |  "sdc": { 2025-01-16 14:56:41.324926 | orchestrator |  "osd_lvm_uuid": "562c7eeb-0cc2-5747-a030-082dcf3dd7cc" 2025-01-16 14:56:41.325248 | orchestrator |  } 2025-01-16 14:56:41.325458 | orchestrator |  } 2025-01-16 14:56:41.325927 | orchestrator | } 2025-01-16 14:56:41.326311 | orchestrator | 2025-01-16 14:56:41.326406 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-01-16 14:56:41.326466 | orchestrator | Thursday 16 January 2025 14:56:41 +0000 (0:00:00.085) 0:00:07.960 ****** 2025-01-16 14:56:41.513335 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:56:41.513623 | orchestrator | 2025-01-16 14:56:41.513683 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-01-16 14:56:41.599138 | orchestrator | Thursday 16 January 2025 14:56:41 +0000 (0:00:00.189) 0:00:08.149 ****** 2025-01-16 14:56:41.599242 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:56:41.675776 | orchestrator | 2025-01-16 14:56:41.675962 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-01-16 14:56:41.675989 | orchestrator | Thursday 16 January 2025 14:56:41 +0000 (0:00:00.083) 0:00:08.232 ****** 2025-01-16 14:56:41.676047 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:56:41.676204 | orchestrator | 2025-01-16 14:56:41.861505 | orchestrator | TASK [Print configuration data] ************************************************ 2025-01-16 14:56:41.861621 | orchestrator | Thursday 16 January 2025 14:56:41 +0000 (0:00:00.079) 0:00:08.312 ****** 2025-01-16 14:56:41.861660 | orchestrator | changed: [testbed-node-3] => { 2025-01-16 14:56:41.861880 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-01-16 14:56:41.861921 | orchestrator |  "ceph_osd_devices": { 2025-01-16 14:56:41.862403 | orchestrator |  "sdb": { 2025-01-16 14:56:41.862437 | orchestrator |  "osd_lvm_uuid": "53488163-bd74-50cc-bfa0-f1a94ed01f33" 2025-01-16 14:56:41.863204 | orchestrator |  }, 2025-01-16 14:56:41.863452 | orchestrator |  "sdc": { 2025-01-16 14:56:41.863482 | orchestrator |  "osd_lvm_uuid": "562c7eeb-0cc2-5747-a030-082dcf3dd7cc" 2025-01-16 14:56:41.863612 | orchestrator |  } 2025-01-16 14:56:41.864276 | orchestrator |  }, 2025-01-16 14:56:41.865056 | orchestrator |  "lvm_volumes": [ 2025-01-16 14:56:41.865172 | orchestrator |  { 2025-01-16 14:56:41.865232 | orchestrator |  "data": "osd-block-53488163-bd74-50cc-bfa0-f1a94ed01f33", 2025-01-16 14:56:41.865358 | orchestrator |  
"data_vg": "ceph-53488163-bd74-50cc-bfa0-f1a94ed01f33" 2025-01-16 14:56:41.865767 | orchestrator |  }, 2025-01-16 14:56:41.866111 | orchestrator |  { 2025-01-16 14:56:41.866452 | orchestrator |  "data": "osd-block-562c7eeb-0cc2-5747-a030-082dcf3dd7cc", 2025-01-16 14:56:41.867029 | orchestrator |  "data_vg": "ceph-562c7eeb-0cc2-5747-a030-082dcf3dd7cc" 2025-01-16 14:56:41.867181 | orchestrator |  } 2025-01-16 14:56:41.867581 | orchestrator |  ] 2025-01-16 14:56:41.867936 | orchestrator |  } 2025-01-16 14:56:41.868320 | orchestrator | } 2025-01-16 14:56:41.868675 | orchestrator | 2025-01-16 14:56:41.869227 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-01-16 14:56:41.870375 | orchestrator | Thursday 16 January 2025 14:56:41 +0000 (0:00:00.185) 0:00:08.498 ****** 2025-01-16 14:56:43.313094 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-01-16 14:56:43.313444 | orchestrator | 2025-01-16 14:56:43.313485 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-01-16 14:56:43.313501 | orchestrator | 2025-01-16 14:56:43.313679 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-01-16 14:56:43.479254 | orchestrator | Thursday 16 January 2025 14:56:43 +0000 (0:00:01.450) 0:00:09.948 ****** 2025-01-16 14:56:43.479373 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-01-16 14:56:43.631520 | orchestrator | 2025-01-16 14:56:43.631641 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-01-16 14:56:43.631657 | orchestrator | Thursday 16 January 2025 14:56:43 +0000 (0:00:00.167) 0:00:10.115 ****** 2025-01-16 14:56:43.631681 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:56:43.632147 | orchestrator | 2025-01-16 14:56:43.632182 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:56:43.632201 | orchestrator | Thursday 16 January 2025 14:56:43 +0000 (0:00:00.151) 0:00:10.267 ****** 2025-01-16 14:56:43.885917 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-01-16 14:56:43.886195 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-01-16 14:56:43.886228 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-01-16 14:56:43.886513 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-01-16 14:56:43.886545 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-01-16 14:56:43.886760 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-01-16 14:56:43.887131 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-01-16 14:56:43.887446 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-01-16 14:56:43.890356 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-01-16 14:56:44.012335 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-01-16 14:56:44.012478 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-01-16 14:56:44.012494 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-01-16 14:56:44.012503 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-01-16 14:56:44.012512 | orchestrator | 2025-01-16 14:56:44.012521 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:56:44.012530 | orchestrator | Thursday 16 January 2025 14:56:43 +0000 (0:00:00.254) 0:00:10.522 ****** 2025-01-16 14:56:44.012555 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:56:44.012610 | orchestrator | 2025-01-16 14:56:44.012621 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:56:44.012633 | orchestrator | Thursday 16 January 2025 14:56:44 +0000 (0:00:00.126) 0:00:10.649 ****** 2025-01-16 14:56:44.144399 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:56:44.144533 | orchestrator | 2025-01-16 14:56:44.144567 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:56:44.144624 | orchestrator | Thursday 16 January 2025 14:56:44 +0000 (0:00:00.132) 0:00:10.781 ****** 2025-01-16 14:56:44.273962 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:56:44.274266 | orchestrator | 2025-01-16 14:56:44.274312 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:56:44.274338 | orchestrator | Thursday 16 January 2025 14:56:44 +0000 (0:00:00.128) 0:00:10.910 ****** 2025-01-16 14:56:44.407491 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:56:44.407675 | orchestrator | 2025-01-16 14:56:44.408108 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:56:44.408227 | orchestrator | Thursday 16 January 2025 14:56:44 +0000 (0:00:00.132) 0:00:11.042 ****** 2025-01-16 14:56:44.544456 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:56:44.544670 | orchestrator | 2025-01-16 14:56:44.544718 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:56:44.545090 | orchestrator | Thursday 16 January 2025 14:56:44 +0000 (0:00:00.138) 0:00:11.181 ****** 2025-01-16 14:56:44.672376 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:56:44.672622 | orchestrator | 2025-01-16 14:56:44.672647 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:56:44.672667 | orchestrator | Thursday 16 January 2025 14:56:44 +0000 (0:00:00.127) 0:00:11.308 ****** 2025-01-16 14:56:44.920929 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:56:44.921147 | orchestrator | 2025-01-16 14:56:44.921171 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:56:44.921303 | orchestrator | Thursday 16 January 2025 14:56:44 +0000 (0:00:00.248) 0:00:11.557 ****** 2025-01-16 14:56:45.052030 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:56:45.052228 | orchestrator | 2025-01-16 14:56:45.052249 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:56:45.052280 | orchestrator | Thursday 16 January 2025 14:56:45 +0000 (0:00:00.130) 0:00:11.688 ****** 2025-01-16 14:56:45.328998 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_899cd4fc-0b48-4c8f-9f9b-d306ab958cdd) 2025-01-16 14:56:45.329261 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_899cd4fc-0b48-4c8f-9f9b-d306ab958cdd) 2025-01-16 14:56:45.329296 | orchestrator | 2025-01-16 14:56:45.329330 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:56:45.329668 | orchestrator | Thursday 16 January 2025 14:56:45 +0000 (0:00:00.276) 0:00:11.964 ****** 2025-01-16 14:56:45.602252 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d1e8c7e9-38c3-4780-8ab7-178f632f9eb8) 2025-01-16 14:56:45.602636 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d1e8c7e9-38c3-4780-8ab7-178f632f9eb8) 2025-01-16 14:56:45.602871 | orchestrator | 2025-01-16 14:56:45.602889 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:56:45.603115 | orchestrator | Thursday 16 January 2025 14:56:45 +0000 (0:00:00.274) 0:00:12.239 ****** 2025-01-16 14:56:45.899210 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_511497a6-ce11-47ca-8c02-acccaddecbc9) 2025-01-16 14:56:45.899667 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_511497a6-ce11-47ca-8c02-acccaddecbc9) 2025-01-16 14:56:45.899982 | orchestrator | 2025-01-16 14:56:45.900014 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:56:45.900033 | orchestrator | Thursday 16 January 2025 14:56:45 +0000 (0:00:00.296) 0:00:12.536 ****** 2025-01-16 14:56:46.204604 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f7bd705e-b5e0-4446-bf55-1dfa4188ee04) 2025-01-16 14:56:46.433041 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f7bd705e-b5e0-4446-bf55-1dfa4188ee04) 2025-01-16 14:56:46.433154 | orchestrator | 2025-01-16 14:56:46.433169 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:56:46.433204 | orchestrator | Thursday 16 January 2025 14:56:46 +0000 (0:00:00.303) 0:00:12.839 ****** 2025-01-16 14:56:46.433228 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-01-16 14:56:46.433279 | orchestrator | 2025-01-16 14:56:46.433290 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:56:46.433301 | orchestrator | Thursday 16 January 2025 14:56:46 +0000 (0:00:00.228) 0:00:13.068 ****** 2025-01-16 14:56:46.701870 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-01-16 14:56:46.702114 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-01-16 14:56:46.702146 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-01-16 14:56:46.702194 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-01-16 14:56:46.702211 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-01-16 14:56:46.702303 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-01-16 14:56:46.703368 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-01-16 14:56:46.703520 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-01-16 14:56:46.703593 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-01-16 14:56:46.703680 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-01-16 14:56:46.703974 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-01-16 14:56:46.704169 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-01-16 14:56:46.704394 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-01-16 14:56:46.704496 | orchestrator | 2025-01-16 14:56:46.704709 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:56:46.705068 | orchestrator | Thursday 16 January 2025 14:56:46 +0000 (0:00:00.268) 0:00:13.336 ****** 2025-01-16 14:56:46.830759 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:56:46.831278 | orchestrator | 2025-01-16 14:56:46.831422 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:56:46.831443 | orchestrator | Thursday 16 January 2025 14:56:46 +0000 (0:00:00.130) 0:00:13.467 ****** 2025-01-16 14:56:47.188753 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:56:47.190370 | orchestrator | 2025-01-16 14:56:47.322651 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:56:47.322772 | orchestrator | Thursday 16 January 2025 14:56:47 +0000 (0:00:00.357) 0:00:13.824 ****** 2025-01-16 14:56:47.322808 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:56:47.459961 | orchestrator | 2025-01-16 14:56:47.460072 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:56:47.460089 | orchestrator | Thursday 16 January 2025 14:56:47 +0000 (0:00:00.131) 0:00:13.956 ****** 2025-01-16 14:56:47.460117 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:56:47.591908 | orchestrator | 2025-01-16 14:56:47.592042 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:56:47.592057 | orchestrator | Thursday 16 January 2025 14:56:47 +0000 (0:00:00.135) 0:00:14.092 ****** 2025-01-16 14:56:47.592084 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:56:47.738295 | orchestrator | 2025-01-16 14:56:47.738419 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:56:47.738439 | orchestrator | Thursday 16 January 2025 14:56:47 +0000 (0:00:00.136) 0:00:14.228 ****** 2025-01-16 14:56:47.738484 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:56:47.739329 | orchestrator | 2025-01-16 14:56:47.739454 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:56:47.739492 | orchestrator | Thursday 16 January 2025 14:56:47 +0000 (0:00:00.143) 0:00:14.372 ****** 2025-01-16 14:56:47.865699 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:56:47.998388 | orchestrator | 2025-01-16 14:56:47.998545 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:56:47.998570 | orchestrator | Thursday 16 January 2025 14:56:47 +0000 (0:00:00.129) 0:00:14.502 ****** 2025-01-16 14:56:47.998603 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:56:47.998716 | orchestrator | 2025-01-16 14:56:47.998742 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-01-16 14:56:48.549887 | orchestrator | Thursday 16 January 2025 14:56:47 +0000 (0:00:00.133) 0:00:14.635 ****** 2025-01-16 14:56:48.550092 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-01-16 14:56:48.551146 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-01-16 14:56:48.551187 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-01-16 14:56:48.551205 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-01-16 14:56:48.680272 | orchestrator | 2025-01-16 14:56:48.680369 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:56:48.680377 | orchestrator | Thursday 16 January 2025 14:56:48 +0000 (0:00:00.550) 0:00:15.186 ****** 2025-01-16 14:56:48.680395 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:56:48.681368 | orchestrator | 2025-01-16 14:56:48.681393 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:56:48.814359 | orchestrator | Thursday 16 January 2025 14:56:48 +0000 (0:00:00.131) 0:00:15.317 ****** 2025-01-16 14:56:48.814529 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:56:48.949173 | orchestrator | 2025-01-16 14:56:48.949366 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:56:48.949397 | orchestrator | Thursday 16 January 2025 14:56:48 +0000 (0:00:00.133) 0:00:15.450 ****** 2025-01-16 14:56:48.949475 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:56:48.949577 | orchestrator | 2025-01-16 14:56:48.949600 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:56:49.349100 | orchestrator | Thursday 16 January 2025 14:56:48 +0000 (0:00:00.135) 0:00:15.585 ****** 2025-01-16 14:56:49.349273 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:56:49.349368 | orchestrator | 2025-01-16 14:56:49.470118 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-01-16 14:56:49.470256 | orchestrator | Thursday 16 January 2025 14:56:49 +0000 (0:00:00.399) 0:00:15.985 ****** 2025-01-16 14:56:49.470279 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-01-16 14:56:49.471549 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-01-16 14:56:49.471643 | orchestrator | 2025-01-16 14:56:49.471693 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-01-16 14:56:49.472350 | orchestrator | Thursday 16 January 2025 14:56:49 +0000 (0:00:00.118) 0:00:16.104 ****** 2025-01-16 14:56:49.561258 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:56:49.651240 | orchestrator | 2025-01-16 14:56:49.651356 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-01-16 14:56:49.651367 | orchestrator | Thursday 16 January 2025 14:56:49 +0000 (0:00:00.091) 0:00:16.196 ****** 2025-01-16 14:56:49.651387 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:56:49.651433 | orchestrator | 2025-01-16 14:56:49.651451 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-01-16 14:56:49.651462 | orchestrator | Thursday 16 January 2025 14:56:49 +0000 (0:00:00.091) 0:00:16.287 ****** 2025-01-16 14:56:49.742292 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:56:49.837030 | orchestrator | 2025-01-16 
14:56:49.837140 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-01-16 14:56:49.837153 | orchestrator | Thursday 16 January 2025 14:56:49 +0000 (0:00:00.090) 0:00:16.378 ****** 2025-01-16 14:56:49.837178 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:56:49.837289 | orchestrator | 2025-01-16 14:56:49.837442 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-01-16 14:56:49.837704 | orchestrator | Thursday 16 January 2025 14:56:49 +0000 (0:00:00.094) 0:00:16.473 ****** 2025-01-16 14:56:49.954185 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd9c27d09-d80a-5255-9afb-1d5e2e5f2f02'}}) 2025-01-16 14:56:49.954295 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9e6463fb-b573-5867-8a5d-b884b3259bdd'}}) 2025-01-16 14:56:49.954314 | orchestrator | 2025-01-16 14:56:49.954381 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-01-16 14:56:49.954658 | orchestrator | Thursday 16 January 2025 14:56:49 +0000 (0:00:00.117) 0:00:16.590 ****** 2025-01-16 14:56:50.062375 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd9c27d09-d80a-5255-9afb-1d5e2e5f2f02'}})  2025-01-16 14:56:50.062560 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9e6463fb-b573-5867-8a5d-b884b3259bdd'}})  2025-01-16 14:56:50.062579 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:56:50.062821 | orchestrator | 2025-01-16 14:56:50.063210 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-01-16 14:56:50.063330 | orchestrator | Thursday 16 January 2025 14:56:50 +0000 (0:00:00.108) 0:00:16.699 ****** 2025-01-16 14:56:50.174953 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd9c27d09-d80a-5255-9afb-1d5e2e5f2f02'}})  2025-01-16 14:56:50.175291 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9e6463fb-b573-5867-8a5d-b884b3259bdd'}})  2025-01-16 14:56:50.175388 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:56:50.175410 | orchestrator | 2025-01-16 14:56:50.175447 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-01-16 14:56:50.275305 | orchestrator | Thursday 16 January 2025 14:56:50 +0000 (0:00:00.111) 0:00:16.811 ****** 2025-01-16 14:56:50.275435 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd9c27d09-d80a-5255-9afb-1d5e2e5f2f02'}})  2025-01-16 14:56:50.275782 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9e6463fb-b573-5867-8a5d-b884b3259bdd'}})  2025-01-16 14:56:50.275799 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:56:50.275816 | orchestrator | 2025-01-16 14:56:50.276134 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-01-16 14:56:50.276328 | orchestrator | Thursday 16 January 2025 14:56:50 +0000 (0:00:00.100) 0:00:16.912 ****** 2025-01-16 14:56:50.370239 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:56:50.370376 | orchestrator | 2025-01-16 14:56:50.370398 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-01-16 14:56:50.370420 | orchestrator | Thursday 16 January 2025 14:56:50 +0000 
(0:00:00.094) 0:00:17.006 ****** 2025-01-16 14:56:50.462681 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:56:50.462780 | orchestrator | 2025-01-16 14:56:50.462793 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-01-16 14:56:50.464288 | orchestrator | Thursday 16 January 2025 14:56:50 +0000 (0:00:00.091) 0:00:17.098 ****** 2025-01-16 14:56:50.670252 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:56:50.670675 | orchestrator | 2025-01-16 14:56:50.670702 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-01-16 14:56:50.758150 | orchestrator | Thursday 16 January 2025 14:56:50 +0000 (0:00:00.208) 0:00:17.306 ****** 2025-01-16 14:56:50.758405 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:56:50.758559 | orchestrator | 2025-01-16 14:56:50.758581 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-01-16 14:56:50.758599 | orchestrator | Thursday 16 January 2025 14:56:50 +0000 (0:00:00.087) 0:00:17.394 ****** 2025-01-16 14:56:50.855002 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:56:50.855132 | orchestrator | 2025-01-16 14:56:50.855164 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-01-16 14:56:50.855634 | orchestrator | Thursday 16 January 2025 14:56:50 +0000 (0:00:00.096) 0:00:17.491 ****** 2025-01-16 14:56:50.951691 | orchestrator | ok: [testbed-node-4] => { 2025-01-16 14:56:50.951817 | orchestrator |  "ceph_osd_devices": { 2025-01-16 14:56:50.952056 | orchestrator |  "sdb": { 2025-01-16 14:56:50.952407 | orchestrator |  "osd_lvm_uuid": "d9c27d09-d80a-5255-9afb-1d5e2e5f2f02" 2025-01-16 14:56:50.952878 | orchestrator |  }, 2025-01-16 14:56:50.953287 | orchestrator |  "sdc": { 2025-01-16 14:56:50.953493 | orchestrator |  "osd_lvm_uuid": "9e6463fb-b573-5867-8a5d-b884b3259bdd" 2025-01-16 14:56:50.956452 | orchestrator |  } 2025-01-16 14:56:51.042083 | orchestrator |  } 2025-01-16 14:56:51.042202 | orchestrator | } 2025-01-16 14:56:51.042217 | orchestrator | 2025-01-16 14:56:51.042229 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-01-16 14:56:51.042240 | orchestrator | Thursday 16 January 2025 14:56:50 +0000 (0:00:00.097) 0:00:17.588 ****** 2025-01-16 14:56:51.042259 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:56:51.042474 | orchestrator | 2025-01-16 14:56:51.042486 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-01-16 14:56:51.042961 | orchestrator | Thursday 16 January 2025 14:56:51 +0000 (0:00:00.090) 0:00:17.678 ****** 2025-01-16 14:56:51.126138 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:56:51.126281 | orchestrator | 2025-01-16 14:56:51.126446 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-01-16 14:56:51.126690 | orchestrator | Thursday 16 January 2025 14:56:51 +0000 (0:00:00.084) 0:00:17.762 ****** 2025-01-16 14:56:51.210885 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:56:51.211532 | orchestrator | 2025-01-16 14:56:51.212051 | orchestrator | TASK [Print configuration data] ************************************************ 2025-01-16 14:56:51.390563 | orchestrator | Thursday 16 January 2025 14:56:51 +0000 (0:00:00.084) 0:00:17.847 ****** 2025-01-16 14:56:51.390695 | orchestrator | changed: [testbed-node-4] => { 2025-01-16 14:56:51.390879 | 
orchestrator |  "_ceph_configure_lvm_config_data": { 2025-01-16 14:56:51.391126 | orchestrator |  "ceph_osd_devices": { 2025-01-16 14:56:51.391619 | orchestrator |  "sdb": { 2025-01-16 14:56:51.393784 | orchestrator |  "osd_lvm_uuid": "d9c27d09-d80a-5255-9afb-1d5e2e5f2f02" 2025-01-16 14:56:51.394237 | orchestrator |  }, 2025-01-16 14:56:51.394274 | orchestrator |  "sdc": { 2025-01-16 14:56:51.394295 | orchestrator |  "osd_lvm_uuid": "9e6463fb-b573-5867-8a5d-b884b3259bdd" 2025-01-16 14:56:51.394306 | orchestrator |  } 2025-01-16 14:56:51.394316 | orchestrator |  }, 2025-01-16 14:56:51.394326 | orchestrator |  "lvm_volumes": [ 2025-01-16 14:56:51.394336 | orchestrator |  { 2025-01-16 14:56:51.394345 | orchestrator |  "data": "osd-block-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02", 2025-01-16 14:56:51.394364 | orchestrator |  "data_vg": "ceph-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02" 2025-01-16 14:56:51.394784 | orchestrator |  }, 2025-01-16 14:56:51.395023 | orchestrator |  { 2025-01-16 14:56:51.395286 | orchestrator |  "data": "osd-block-9e6463fb-b573-5867-8a5d-b884b3259bdd", 2025-01-16 14:56:51.395674 | orchestrator |  "data_vg": "ceph-9e6463fb-b573-5867-8a5d-b884b3259bdd" 2025-01-16 14:56:51.396213 | orchestrator |  } 2025-01-16 14:56:51.396480 | orchestrator |  ] 2025-01-16 14:56:51.396941 | orchestrator |  } 2025-01-16 14:56:51.397365 | orchestrator | } 2025-01-16 14:56:51.397577 | orchestrator | 2025-01-16 14:56:51.398188 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-01-16 14:56:51.398330 | orchestrator | Thursday 16 January 2025 14:56:51 +0000 (0:00:00.179) 0:00:18.027 ****** 2025-01-16 14:56:52.410555 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-01-16 14:56:52.410721 | orchestrator | 2025-01-16 14:56:52.410943 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-01-16 14:56:52.411029 | orchestrator | 2025-01-16 14:56:52.411052 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-01-16 14:56:52.411102 | orchestrator | Thursday 16 January 2025 14:56:52 +0000 (0:00:01.019) 0:00:19.046 ****** 2025-01-16 14:56:52.569771 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-01-16 14:56:52.712521 | orchestrator | 2025-01-16 14:56:52.712667 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-01-16 14:56:52.712685 | orchestrator | Thursday 16 January 2025 14:56:52 +0000 (0:00:00.158) 0:00:19.205 ****** 2025-01-16 14:56:52.712709 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:56:52.712758 | orchestrator | 2025-01-16 14:56:52.712770 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:56:52.712785 | orchestrator | Thursday 16 January 2025 14:56:52 +0000 (0:00:00.143) 0:00:19.349 ****** 2025-01-16 14:56:52.964677 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-01-16 14:56:52.965053 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-01-16 14:56:52.965090 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-01-16 14:56:52.965296 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-01-16 14:56:52.965521 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-5 => (item=loop4) 2025-01-16 14:56:52.965976 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-01-16 14:56:52.966313 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-01-16 14:56:52.966589 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-01-16 14:56:52.969597 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-01-16 14:56:52.969801 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-01-16 14:56:52.969898 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-01-16 14:56:52.970075 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-01-16 14:56:52.970107 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-01-16 14:56:52.970212 | orchestrator | 2025-01-16 14:56:52.970454 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:56:52.970604 | orchestrator | Thursday 16 January 2025 14:56:52 +0000 (0:00:00.252) 0:00:19.601 ****** 2025-01-16 14:56:53.092370 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:56:53.092591 | orchestrator | 2025-01-16 14:56:53.092629 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:56:53.092813 | orchestrator | Thursday 16 January 2025 14:56:53 +0000 (0:00:00.127) 0:00:19.728 ****** 2025-01-16 14:56:53.215087 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:56:53.215329 | orchestrator | 2025-01-16 14:56:53.215374 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:56:53.467682 | orchestrator | Thursday 16 January 2025 14:56:53 +0000 (0:00:00.122) 0:00:19.851 ****** 2025-01-16 14:56:53.467824 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:56:53.468223 | orchestrator | 2025-01-16 14:56:53.468259 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:56:53.468338 | orchestrator | Thursday 16 January 2025 14:56:53 +0000 (0:00:00.252) 0:00:20.104 ****** 2025-01-16 14:56:53.598394 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:56:53.727300 | orchestrator | 2025-01-16 14:56:53.727411 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:56:53.728198 | orchestrator | Thursday 16 January 2025 14:56:53 +0000 (0:00:00.131) 0:00:20.235 ****** 2025-01-16 14:56:53.728245 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:56:53.855456 | orchestrator | 2025-01-16 14:56:53.855614 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:56:53.855722 | orchestrator | Thursday 16 January 2025 14:56:53 +0000 (0:00:00.129) 0:00:20.364 ****** 2025-01-16 14:56:53.855762 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:56:53.985894 | orchestrator | 2025-01-16 14:56:53.986011 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:56:53.986073 | orchestrator | Thursday 16 January 2025 14:56:53 +0000 (0:00:00.127) 0:00:20.491 ****** 2025-01-16 14:56:53.986098 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:56:53.986226 
| orchestrator | 2025-01-16 14:56:53.986249 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:56:53.986526 | orchestrator | Thursday 16 January 2025 14:56:53 +0000 (0:00:00.128) 0:00:20.620 ****** 2025-01-16 14:56:54.113783 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:56:54.114149 | orchestrator | 2025-01-16 14:56:54.114185 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:56:54.116237 | orchestrator | Thursday 16 January 2025 14:56:54 +0000 (0:00:00.130) 0:00:20.750 ****** 2025-01-16 14:56:54.391989 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5f86695f-209f-4777-b6ae-0a791cf41e6e) 2025-01-16 14:56:54.392254 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5f86695f-209f-4777-b6ae-0a791cf41e6e) 2025-01-16 14:56:54.392296 | orchestrator | 2025-01-16 14:56:54.393413 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:56:54.393479 | orchestrator | Thursday 16 January 2025 14:56:54 +0000 (0:00:00.278) 0:00:21.028 ****** 2025-01-16 14:56:54.675718 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0aac5059-2a3a-4141-840f-fb09a7465e72) 2025-01-16 14:56:54.675861 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0aac5059-2a3a-4141-840f-fb09a7465e72) 2025-01-16 14:56:54.675880 | orchestrator | 2025-01-16 14:56:54.676259 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:56:54.676439 | orchestrator | Thursday 16 January 2025 14:56:54 +0000 (0:00:00.282) 0:00:21.311 ****** 2025-01-16 14:56:54.965384 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_97685de2-31d7-40a6-8026-91294c9f6af1) 2025-01-16 14:56:54.965619 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_97685de2-31d7-40a6-8026-91294c9f6af1) 2025-01-16 14:56:54.965656 | orchestrator | 2025-01-16 14:56:54.966138 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:56:54.966229 | orchestrator | Thursday 16 January 2025 14:56:54 +0000 (0:00:00.289) 0:00:21.601 ****** 2025-01-16 14:56:55.377342 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d740be6b-1b5d-4ad1-85aa-7275c0983c2d) 2025-01-16 14:56:55.379211 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d740be6b-1b5d-4ad1-85aa-7275c0983c2d) 2025-01-16 14:56:55.379471 | orchestrator | 2025-01-16 14:56:55.379773 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:56:55.380069 | orchestrator | Thursday 16 January 2025 14:56:55 +0000 (0:00:00.413) 0:00:22.014 ****** 2025-01-16 14:56:55.816990 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-01-16 14:56:55.817583 | orchestrator | 2025-01-16 14:56:56.083283 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:56:56.083481 | orchestrator | Thursday 16 January 2025 14:56:55 +0000 (0:00:00.439) 0:00:22.453 ****** 2025-01-16 14:56:56.083522 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-01-16 14:56:56.083669 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-01-16 14:56:56.083694 | orchestrator | 
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-01-16 14:56:56.084013 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-01-16 14:56:56.084425 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-01-16 14:56:56.084677 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-01-16 14:56:56.085483 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-01-16 14:56:56.085711 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-01-16 14:56:56.085747 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-01-16 14:56:56.086173 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-01-16 14:56:56.086359 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-01-16 14:56:56.086827 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-01-16 14:56:56.087410 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-01-16 14:56:56.087556 | orchestrator | 2025-01-16 14:56:56.087720 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:56:56.088038 | orchestrator | Thursday 16 January 2025 14:56:56 +0000 (0:00:00.265) 0:00:22.719 ****** 2025-01-16 14:56:56.219791 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:56:56.220084 | orchestrator | 2025-01-16 14:56:56.220111 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:56:56.220133 | orchestrator | Thursday 16 January 2025 14:56:56 +0000 (0:00:00.136) 0:00:22.856 ****** 2025-01-16 14:56:56.355976 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:56:56.356135 | orchestrator | 2025-01-16 14:56:56.356153 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:56:56.356168 | orchestrator | Thursday 16 January 2025 14:56:56 +0000 (0:00:00.136) 0:00:22.992 ****** 2025-01-16 14:56:56.490426 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:56:56.490616 | orchestrator | 2025-01-16 14:56:56.490675 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:56:56.490947 | orchestrator | Thursday 16 January 2025 14:56:56 +0000 (0:00:00.133) 0:00:23.126 ****** 2025-01-16 14:56:56.622425 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:56:56.622570 | orchestrator | 2025-01-16 14:56:56.755543 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:56:56.755656 | orchestrator | Thursday 16 January 2025 14:56:56 +0000 (0:00:00.132) 0:00:23.259 ****** 2025-01-16 14:56:56.755686 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:56:56.757134 | orchestrator | 2025-01-16 14:56:56.888141 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:56:56.888286 | orchestrator | Thursday 16 January 2025 14:56:56 +0000 (0:00:00.133) 0:00:23.392 ****** 2025-01-16 14:56:56.888328 | orchestrator | skipping: [testbed-node-5] 2025-01-16 
14:56:56.888436 | orchestrator | 2025-01-16 14:56:56.888462 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:56:57.029059 | orchestrator | Thursday 16 January 2025 14:56:56 +0000 (0:00:00.132) 0:00:23.524 ****** 2025-01-16 14:56:57.029225 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:56:57.171478 | orchestrator | 2025-01-16 14:56:57.171599 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:56:57.171619 | orchestrator | Thursday 16 January 2025 14:56:57 +0000 (0:00:00.140) 0:00:23.664 ****** 2025-01-16 14:56:57.171650 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:56:57.845342 | orchestrator | 2025-01-16 14:56:57.845538 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:56:57.845620 | orchestrator | Thursday 16 January 2025 14:56:57 +0000 (0:00:00.140) 0:00:23.805 ****** 2025-01-16 14:56:57.845668 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-01-16 14:56:57.845797 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-01-16 14:56:57.845985 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-01-16 14:56:57.846254 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-01-16 14:56:57.846289 | orchestrator | 2025-01-16 14:56:57.846321 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:56:57.979687 | orchestrator | Thursday 16 January 2025 14:56:57 +0000 (0:00:00.675) 0:00:24.480 ****** 2025-01-16 14:56:57.979869 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:56:57.979956 | orchestrator | 2025-01-16 14:56:57.979977 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:56:58.114549 | orchestrator | Thursday 16 January 2025 14:56:57 +0000 (0:00:00.135) 0:00:24.616 ****** 2025-01-16 14:56:58.114647 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:56:58.246984 | orchestrator | 2025-01-16 14:56:58.247085 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:56:58.247100 | orchestrator | Thursday 16 January 2025 14:56:58 +0000 (0:00:00.134) 0:00:24.751 ****** 2025-01-16 14:56:58.247127 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:56:58.247202 | orchestrator | 2025-01-16 14:56:58.247227 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:56:58.247254 | orchestrator | Thursday 16 January 2025 14:56:58 +0000 (0:00:00.132) 0:00:24.884 ****** 2025-01-16 14:56:58.382468 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:56:58.382702 | orchestrator | 2025-01-16 14:56:58.382783 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-01-16 14:56:58.382799 | orchestrator | Thursday 16 January 2025 14:56:58 +0000 (0:00:00.133) 0:00:25.017 ****** 2025-01-16 14:56:58.491733 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-01-16 14:56:58.491987 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-01-16 14:56:58.492032 | orchestrator | 2025-01-16 14:56:58.492061 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-01-16 14:56:58.492155 | orchestrator | Thursday 16 January 2025 14:56:58 +0000 (0:00:00.111) 0:00:25.128 ****** 2025-01-16 14:56:58.577743 | 
orchestrator | skipping: [testbed-node-5] 2025-01-16 14:56:58.578278 | orchestrator | 2025-01-16 14:56:58.578400 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-01-16 14:56:58.578632 | orchestrator | Thursday 16 January 2025 14:56:58 +0000 (0:00:00.086) 0:00:25.214 ****** 2025-01-16 14:56:58.661878 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:56:58.662004 | orchestrator | 2025-01-16 14:56:58.662055 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-01-16 14:56:58.662070 | orchestrator | Thursday 16 January 2025 14:56:58 +0000 (0:00:00.084) 0:00:25.298 ****** 2025-01-16 14:56:58.742356 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:56:58.742560 | orchestrator | 2025-01-16 14:56:58.743489 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-01-16 14:56:58.743613 | orchestrator | Thursday 16 January 2025 14:56:58 +0000 (0:00:00.080) 0:00:25.379 ****** 2025-01-16 14:56:58.826120 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:56:58.826241 | orchestrator | 2025-01-16 14:56:58.826256 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-01-16 14:56:58.826268 | orchestrator | Thursday 16 January 2025 14:56:58 +0000 (0:00:00.083) 0:00:25.463 ****** 2025-01-16 14:56:58.938319 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '53007ac5-07c2-53cd-add6-e57729925218'}}) 2025-01-16 14:56:58.938475 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '54c8019f-0033-5b40-9c4f-7f2e43f78b89'}}) 2025-01-16 14:56:58.938493 | orchestrator | 2025-01-16 14:56:58.938508 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-01-16 14:56:58.938706 | orchestrator | Thursday 16 January 2025 14:56:58 +0000 (0:00:00.112) 0:00:25.575 ****** 2025-01-16 14:56:59.148160 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '53007ac5-07c2-53cd-add6-e57729925218'}})  2025-01-16 14:56:59.148360 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '54c8019f-0033-5b40-9c4f-7f2e43f78b89'}})  2025-01-16 14:56:59.148408 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:56:59.148429 | orchestrator | 2025-01-16 14:56:59.148562 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-01-16 14:56:59.148585 | orchestrator | Thursday 16 January 2025 14:56:59 +0000 (0:00:00.209) 0:00:25.784 ****** 2025-01-16 14:56:59.246735 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '53007ac5-07c2-53cd-add6-e57729925218'}})  2025-01-16 14:56:59.343738 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '54c8019f-0033-5b40-9c4f-7f2e43f78b89'}})  2025-01-16 14:56:59.343874 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:56:59.343888 | orchestrator | 2025-01-16 14:56:59.343904 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-01-16 14:56:59.343913 | orchestrator | Thursday 16 January 2025 14:56:59 +0000 (0:00:00.098) 0:00:25.883 ****** 2025-01-16 14:56:59.343933 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '53007ac5-07c2-53cd-add6-e57729925218'}})  2025-01-16 14:56:59.343995 
| orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '54c8019f-0033-5b40-9c4f-7f2e43f78b89'}})  2025-01-16 14:56:59.344006 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:56:59.344016 | orchestrator | 2025-01-16 14:56:59.433520 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-01-16 14:56:59.433589 | orchestrator | Thursday 16 January 2025 14:56:59 +0000 (0:00:00.097) 0:00:25.980 ****** 2025-01-16 14:56:59.433607 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:56:59.524895 | orchestrator | 2025-01-16 14:56:59.524962 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-01-16 14:56:59.524968 | orchestrator | Thursday 16 January 2025 14:56:59 +0000 (0:00:00.089) 0:00:26.070 ****** 2025-01-16 14:56:59.524981 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:56:59.525027 | orchestrator | 2025-01-16 14:56:59.525039 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-01-16 14:56:59.525050 | orchestrator | Thursday 16 January 2025 14:56:59 +0000 (0:00:00.090) 0:00:26.161 ****** 2025-01-16 14:56:59.609630 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:56:59.609736 | orchestrator | 2025-01-16 14:56:59.609749 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-01-16 14:56:59.609809 | orchestrator | Thursday 16 January 2025 14:56:59 +0000 (0:00:00.085) 0:00:26.246 ****** 2025-01-16 14:56:59.693005 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:56:59.775288 | orchestrator | 2025-01-16 14:56:59.775398 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-01-16 14:56:59.775418 | orchestrator | Thursday 16 January 2025 14:56:59 +0000 (0:00:00.083) 0:00:26.329 ****** 2025-01-16 14:56:59.775452 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:56:59.863292 | orchestrator | 2025-01-16 14:56:59.863392 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-01-16 14:56:59.863408 | orchestrator | Thursday 16 January 2025 14:56:59 +0000 (0:00:00.082) 0:00:26.412 ****** 2025-01-16 14:56:59.863432 | orchestrator | ok: [testbed-node-5] => { 2025-01-16 14:56:59.863566 | orchestrator |  "ceph_osd_devices": { 2025-01-16 14:56:59.863587 | orchestrator |  "sdb": { 2025-01-16 14:56:59.863599 | orchestrator |  "osd_lvm_uuid": "53007ac5-07c2-53cd-add6-e57729925218" 2025-01-16 14:56:59.863614 | orchestrator |  }, 2025-01-16 14:56:59.863679 | orchestrator |  "sdc": { 2025-01-16 14:56:59.863696 | orchestrator |  "osd_lvm_uuid": "54c8019f-0033-5b40-9c4f-7f2e43f78b89" 2025-01-16 14:56:59.863963 | orchestrator |  } 2025-01-16 14:56:59.864055 | orchestrator |  } 2025-01-16 14:56:59.864220 | orchestrator | } 2025-01-16 14:56:59.864321 | orchestrator | 2025-01-16 14:56:59.864381 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-01-16 14:56:59.864550 | orchestrator | Thursday 16 January 2025 14:56:59 +0000 (0:00:00.088) 0:00:26.500 ****** 2025-01-16 14:56:59.945499 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:57:00.027926 | orchestrator | 2025-01-16 14:57:00.028031 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-01-16 14:57:00.028051 | orchestrator | Thursday 16 January 2025 14:56:59 +0000 (0:00:00.082) 0:00:26.582 ****** 2025-01-16 
14:57:00.028081 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:57:00.108473 | orchestrator | 2025-01-16 14:57:00.108562 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-01-16 14:57:00.108580 | orchestrator | Thursday 16 January 2025 14:57:00 +0000 (0:00:00.082) 0:00:26.664 ****** 2025-01-16 14:57:00.108608 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:57:00.411815 | orchestrator | 2025-01-16 14:57:00.411952 | orchestrator | TASK [Print configuration data] ************************************************ 2025-01-16 14:57:00.411975 | orchestrator | Thursday 16 January 2025 14:57:00 +0000 (0:00:00.080) 0:00:26.745 ****** 2025-01-16 14:57:00.412038 | orchestrator | changed: [testbed-node-5] => { 2025-01-16 14:57:00.412121 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-01-16 14:57:00.412177 | orchestrator |  "ceph_osd_devices": { 2025-01-16 14:57:00.412194 | orchestrator |  "sdb": { 2025-01-16 14:57:00.412209 | orchestrator |  "osd_lvm_uuid": "53007ac5-07c2-53cd-add6-e57729925218" 2025-01-16 14:57:00.412248 | orchestrator |  }, 2025-01-16 14:57:00.412269 | orchestrator |  "sdc": { 2025-01-16 14:57:00.412332 | orchestrator |  "osd_lvm_uuid": "54c8019f-0033-5b40-9c4f-7f2e43f78b89" 2025-01-16 14:57:00.412353 | orchestrator |  } 2025-01-16 14:57:00.412587 | orchestrator |  }, 2025-01-16 14:57:00.412620 | orchestrator |  "lvm_volumes": [ 2025-01-16 14:57:00.412679 | orchestrator |  { 2025-01-16 14:57:00.412909 | orchestrator |  "data": "osd-block-53007ac5-07c2-53cd-add6-e57729925218", 2025-01-16 14:57:00.414680 | orchestrator |  "data_vg": "ceph-53007ac5-07c2-53cd-add6-e57729925218" 2025-01-16 14:57:00.415083 | orchestrator |  }, 2025-01-16 14:57:00.415123 | orchestrator |  { 2025-01-16 14:57:00.415146 | orchestrator |  "data": "osd-block-54c8019f-0033-5b40-9c4f-7f2e43f78b89", 2025-01-16 14:57:00.415167 | orchestrator |  "data_vg": "ceph-54c8019f-0033-5b40-9c4f-7f2e43f78b89" 2025-01-16 14:57:00.415186 | orchestrator |  } 2025-01-16 14:57:00.415199 | orchestrator |  ] 2025-01-16 14:57:00.415230 | orchestrator |  } 2025-01-16 14:57:00.415251 | orchestrator | } 2025-01-16 14:57:00.415279 | orchestrator | 2025-01-16 14:57:01.101170 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-01-16 14:57:01.101275 | orchestrator | Thursday 16 January 2025 14:57:00 +0000 (0:00:00.302) 0:00:27.048 ****** 2025-01-16 14:57:01.101307 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-01-16 14:57:01.101742 | orchestrator | 2025-01-16 14:57:01.101777 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 14:57:01.102094 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-01-16 14:57:01.102120 | orchestrator | 2025-01-16 14:57:01 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-01-16 14:57:01.102134 | orchestrator | 2025-01-16 14:57:01 | INFO  | Please wait and do not abort execution. 
2025-01-16 14:57:01.102152 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-01-16 14:57:01.102356 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-01-16 14:57:01.102667 | orchestrator | 2025-01-16 14:57:01.102695 | orchestrator | 2025-01-16 14:57:01.103119 | orchestrator | 2025-01-16 14:57:01.103289 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 14:57:01.103401 | orchestrator | Thursday 16 January 2025 14:57:01 +0000 (0:00:00.688) 0:00:27.737 ****** 2025-01-16 14:57:01.103658 | orchestrator | =============================================================================== 2025-01-16 14:57:01.103912 | orchestrator | Write configuration file ------------------------------------------------ 3.16s 2025-01-16 14:57:01.104173 | orchestrator | Add known links to the list of available block devices ------------------ 0.86s 2025-01-16 14:57:01.104311 | orchestrator | Add known partitions to the list of available block devices ------------- 0.80s 2025-01-16 14:57:01.104671 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s 2025-01-16 14:57:01.105003 | orchestrator | Print configuration data ------------------------------------------------ 0.67s 2025-01-16 14:57:01.105155 | orchestrator | Add known partitions to the list of available block devices ------------- 0.55s 2025-01-16 14:57:01.105384 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.49s 2025-01-16 14:57:01.105420 | orchestrator | Add known links to the list of available block devices ------------------ 0.49s 2025-01-16 14:57:01.105614 | orchestrator | Get initial list of available block devices ----------------------------- 0.46s 2025-01-16 14:57:01.105705 | orchestrator | Add known links to the list of available block devices ------------------ 0.44s 2025-01-16 14:57:01.105938 | orchestrator | Add known partitions to the list of available block devices ------------- 0.44s 2025-01-16 14:57:01.106133 | orchestrator | Add known links to the list of available block devices ------------------ 0.44s 2025-01-16 14:57:01.106164 | orchestrator | Add known links to the list of available block devices ------------------ 0.41s 2025-01-16 14:57:01.106302 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.41s 2025-01-16 14:57:01.106482 | orchestrator | Add known partitions to the list of available block devices ------------- 0.40s 2025-01-16 14:57:01.106575 | orchestrator | Add known links to the list of available block devices ------------------ 0.40s 2025-01-16 14:57:01.106653 | orchestrator | Set DB devices config data ---------------------------------------------- 0.38s 2025-01-16 14:57:01.106900 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.37s 2025-01-16 14:57:01.107069 | orchestrator | Print WAL devices ------------------------------------------------------- 0.36s 2025-01-16 14:57:01.107154 | orchestrator | Add known partitions to the list of available block devices ------------- 0.36s 2025-01-16 14:57:02.418380 | orchestrator | 2025-01-16 14:57:02 | INFO  | Task eded1a28-bee3-4a5e-acfb-89cbce9309e4 is running in background. Output coming soon. 
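
The configuration play that just finished derives, per node, a ceph_osd_devices map (here sdb and sdc on testbed-node-5, each with a generated osd_lvm_uuid) plus a matching lvm_volumes list with one data/data_vg pair per OSD, and then persists it via the "Write configuration file" handler on testbed-manager. For reference, the written document for testbed-node-5 plausibly looks like the YAML sketch below; the values are exactly those shown by "Print configuration data" above, while the file name, target path, and any surrounding keys are assumptions and environment-specific.

# Sketch only - per-node Ceph LVM configuration as printed for testbed-node-5;
# the actual file layout written by the handler may differ.
ceph_osd_devices:
  sdb:
    osd_lvm_uuid: 53007ac5-07c2-53cd-add6-e57729925218
  sdc:
    osd_lvm_uuid: 54c8019f-0033-5b40-9c4f-7f2e43f78b89
lvm_volumes:
  - data: osd-block-53007ac5-07c2-53cd-add6-e57729925218
    data_vg: ceph-53007ac5-07c2-53cd-add6-e57729925218
  - data: osd-block-54c8019f-0033-5b40-9c4f-7f2e43f78b89
    data_vg: ceph-54c8019f-0033-5b40-9c4f-7f2e43f78b89
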
2025-01-16 14:57:35.065265 | orchestrator | 2025-01-16 14:57:29 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-01-16 14:57:36.130695 | orchestrator | 2025-01-16 14:57:29 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-01-16 14:57:36.130788 | orchestrator | 2025-01-16 14:57:29 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-01-16 14:57:36.130797 | orchestrator | 2025-01-16 14:57:29 | INFO  | Handling group overwrites in 99-overwrite 2025-01-16 14:57:36.130812 | orchestrator | 2025-01-16 14:57:29 | INFO  | Removing group ceph-mds from 50-ceph 2025-01-16 14:57:36.130826 | orchestrator | 2025-01-16 14:57:29 | INFO  | Removing group ceph-rgw from 50-ceph 2025-01-16 14:57:36.130831 | orchestrator | 2025-01-16 14:57:29 | INFO  | Removing group netbird:children from 50-infrastruture 2025-01-16 14:57:36.130837 | orchestrator | 2025-01-16 14:57:29 | INFO  | Removing group storage:children from 50-kolla 2025-01-16 14:57:36.130868 | orchestrator | 2025-01-16 14:57:29 | INFO  | Removing group frr:children from 60-generic 2025-01-16 14:57:36.130874 | orchestrator | 2025-01-16 14:57:29 | INFO  | Handling group overwrites in 20-roles 2025-01-16 14:57:36.130880 | orchestrator | 2025-01-16 14:57:29 | INFO  | Removing group k3s_node from 50-infrastruture 2025-01-16 14:57:36.130900 | orchestrator | 2025-01-16 14:57:29 | INFO  | File 20-netbox not found in /inventory.pre/ 2025-01-16 14:57:36.130905 | orchestrator | 2025-01-16 14:57:34 | INFO  | Writing /inventory/clustershell/ansible.yaml with clustershell groups 2025-01-16 14:57:36.130921 | orchestrator | 2025-01-16 14:57:36 | INFO  | Task 473adc3b-fb63-4a87-945f-5b13fe4229fc (ceph-create-lvm-devices) was prepared for execution. 2025-01-16 14:57:38.135531 | orchestrator | 2025-01-16 14:57:36 | INFO  | It takes a moment until task 473adc3b-fb63-4a87-945f-5b13fe4229fc (ceph-create-lvm-devices) has been started and output is visible here. 
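
The task prepared here, ceph-create-lvm-devices, is the play whose output follows: for every entry in lvm_volumes it creates one LVM volume group on the raw OSD disk and one logical volume that fills it, for later use as the OSD block devices (see the "Create block VGs" and "Create block LVs" tasks and the final LVM report for testbed-node-3 below). A minimal, standalone sketch of that step for a single device is shown next; it assumes the community.general.lvg and community.general.lvol modules rather than the actual OSISM tasks, and reuses the VG/LV names and the /dev/sdb physical volume reported for testbed-node-3 further down.

# Minimal sketch only - not the OSISM ceph-create-lvm-devices tasks themselves.
# VG/LV names and the PV are taken from the testbed-node-3 output below.
- name: Create block VG for one OSD device (sketch)
  community.general.lvg:
    vg: ceph-53488163-bd74-50cc-bfa0-f1a94ed01f33        # data_vg from lvm_volumes
    pvs: /dev/sdb                                        # PV shown in the LVM report
    state: present

- name: Create block LV filling the VG (sketch)
  community.general.lvol:
    vg: ceph-53488163-bd74-50cc-bfa0-f1a94ed01f33
    lv: osd-block-53488163-bd74-50cc-bfa0-f1a94ed01f33   # data from lvm_volumes
    size: 100%FREE
    state: present
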
2025-01-16 14:57:38.135638 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-01-16 14:57:38.492734 | orchestrator | 2025-01-16 14:57:38.492887 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-01-16 14:57:38.492901 | orchestrator | 2025-01-16 14:57:38.492912 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-01-16 14:57:38.643312 | orchestrator | Thursday 16 January 2025 14:57:38 +0000 (0:00:00.310) 0:00:00.310 ****** 2025-01-16 14:57:38.643431 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-01-16 14:57:38.643620 | orchestrator | 2025-01-16 14:57:38.643637 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-01-16 14:57:38.643651 | orchestrator | Thursday 16 January 2025 14:57:38 +0000 (0:00:00.151) 0:00:00.462 ****** 2025-01-16 14:57:38.780478 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:57:39.132388 | orchestrator | 2025-01-16 14:57:39.132542 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:57:39.132562 | orchestrator | Thursday 16 January 2025 14:57:38 +0000 (0:00:00.137) 0:00:00.600 ****** 2025-01-16 14:57:39.132586 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-01-16 14:57:39.132644 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-01-16 14:57:39.132659 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-01-16 14:57:39.133290 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-01-16 14:57:39.133481 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-01-16 14:57:39.133722 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-01-16 14:57:39.134144 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-01-16 14:57:39.136226 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-01-16 14:57:39.136297 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-01-16 14:57:39.253039 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-01-16 14:57:39.253219 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-01-16 14:57:39.253233 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-01-16 14:57:39.253239 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-01-16 14:57:39.253245 | orchestrator | 2025-01-16 14:57:39.253251 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:57:39.253258 | orchestrator | Thursday 16 January 2025 14:57:39 +0000 (0:00:00.352) 0:00:00.952 ****** 2025-01-16 14:57:39.253279 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:39.253426 | orchestrator | 2025-01-16 14:57:39.253461 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:57:39.370700 | orchestrator | Thursday 16 January 2025 14:57:39 +0000 
(0:00:00.120) 0:00:01.073 ****** 2025-01-16 14:57:39.370813 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:39.371047 | orchestrator | 2025-01-16 14:57:39.371162 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:57:39.371203 | orchestrator | Thursday 16 January 2025 14:57:39 +0000 (0:00:00.117) 0:00:01.190 ****** 2025-01-16 14:57:39.488268 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:39.488491 | orchestrator | 2025-01-16 14:57:39.488531 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:57:39.488562 | orchestrator | Thursday 16 January 2025 14:57:39 +0000 (0:00:00.116) 0:00:01.307 ****** 2025-01-16 14:57:39.609294 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:39.609444 | orchestrator | 2025-01-16 14:57:39.609463 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:57:39.609480 | orchestrator | Thursday 16 January 2025 14:57:39 +0000 (0:00:00.121) 0:00:01.429 ****** 2025-01-16 14:57:39.734376 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:39.735486 | orchestrator | 2025-01-16 14:57:39.735524 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:57:39.735548 | orchestrator | Thursday 16 January 2025 14:57:39 +0000 (0:00:00.124) 0:00:01.554 ****** 2025-01-16 14:57:39.893124 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:39.893293 | orchestrator | 2025-01-16 14:57:39.893321 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:57:39.893402 | orchestrator | Thursday 16 January 2025 14:57:39 +0000 (0:00:00.158) 0:00:01.712 ****** 2025-01-16 14:57:40.034785 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:40.158903 | orchestrator | 2025-01-16 14:57:40.159024 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:57:40.159040 | orchestrator | Thursday 16 January 2025 14:57:40 +0000 (0:00:00.141) 0:00:01.854 ****** 2025-01-16 14:57:40.159064 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:40.570940 | orchestrator | 2025-01-16 14:57:40.571084 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:57:40.571099 | orchestrator | Thursday 16 January 2025 14:57:40 +0000 (0:00:00.124) 0:00:01.978 ****** 2025-01-16 14:57:40.571121 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6fd23866-075a-4a28-b944-e328afaaaf4b) 2025-01-16 14:57:41.046334 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6fd23866-075a-4a28-b944-e328afaaaf4b) 2025-01-16 14:57:41.046424 | orchestrator | 2025-01-16 14:57:41.046433 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:57:41.046439 | orchestrator | Thursday 16 January 2025 14:57:40 +0000 (0:00:00.412) 0:00:02.390 ****** 2025-01-16 14:57:41.046455 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_a3fa75ed-12ad-4d98-b1e3-06058efbf95a) 2025-01-16 14:57:41.366502 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_a3fa75ed-12ad-4d98-b1e3-06058efbf95a) 2025-01-16 14:57:41.366616 | orchestrator | 2025-01-16 14:57:41.366633 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 
14:57:41.366645 | orchestrator | Thursday 16 January 2025 14:57:41 +0000 (0:00:00.475) 0:00:02.865 ****** 2025-01-16 14:57:41.366671 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0646438b-3566-4bd7-ac9f-c7444a60ff3f) 2025-01-16 14:57:41.657573 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0646438b-3566-4bd7-ac9f-c7444a60ff3f) 2025-01-16 14:57:41.657694 | orchestrator | 2025-01-16 14:57:41.657719 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:57:41.657740 | orchestrator | Thursday 16 January 2025 14:57:41 +0000 (0:00:00.320) 0:00:03.186 ****** 2025-01-16 14:57:41.657778 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_72b30f3d-ea4f-4fbe-a722-d77662b0ee19) 2025-01-16 14:57:41.657812 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_72b30f3d-ea4f-4fbe-a722-d77662b0ee19) 2025-01-16 14:57:41.657835 | orchestrator | 2025-01-16 14:57:41.868476 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:57:41.868644 | orchestrator | Thursday 16 January 2025 14:57:41 +0000 (0:00:00.291) 0:00:03.477 ****** 2025-01-16 14:57:41.868680 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-01-16 14:57:42.174729 | orchestrator | 2025-01-16 14:57:42.175033 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:57:42.175121 | orchestrator | Thursday 16 January 2025 14:57:41 +0000 (0:00:00.210) 0:00:03.688 ****** 2025-01-16 14:57:42.175179 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-01-16 14:57:42.175313 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-01-16 14:57:42.175337 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-01-16 14:57:42.175389 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-01-16 14:57:42.175406 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-01-16 14:57:42.175427 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-01-16 14:57:42.175482 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-01-16 14:57:42.175498 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-01-16 14:57:42.175516 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-01-16 14:57:42.177087 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-01-16 14:57:42.305141 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-01-16 14:57:42.305259 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-01-16 14:57:42.305275 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-01-16 14:57:42.305287 | orchestrator | 2025-01-16 14:57:42.305299 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:57:42.305310 | orchestrator | Thursday 16 January 2025 14:57:42 
+0000 (0:00:00.305) 0:00:03.993 ****** 2025-01-16 14:57:42.305336 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:42.431488 | orchestrator | 2025-01-16 14:57:42.431664 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:57:42.431692 | orchestrator | Thursday 16 January 2025 14:57:42 +0000 (0:00:00.129) 0:00:04.123 ****** 2025-01-16 14:57:42.431722 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:42.431793 | orchestrator | 2025-01-16 14:57:42.432364 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:57:42.432626 | orchestrator | Thursday 16 January 2025 14:57:42 +0000 (0:00:00.126) 0:00:04.250 ****** 2025-01-16 14:57:42.570394 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:42.572245 | orchestrator | 2025-01-16 14:57:42.572286 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:57:42.572299 | orchestrator | Thursday 16 January 2025 14:57:42 +0000 (0:00:00.139) 0:00:04.390 ****** 2025-01-16 14:57:42.694253 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:42.694356 | orchestrator | 2025-01-16 14:57:42.694365 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:57:42.694374 | orchestrator | Thursday 16 January 2025 14:57:42 +0000 (0:00:00.123) 0:00:04.514 ****** 2025-01-16 14:57:42.818714 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:42.818837 | orchestrator | 2025-01-16 14:57:42.818863 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:57:42.818874 | orchestrator | Thursday 16 January 2025 14:57:42 +0000 (0:00:00.123) 0:00:04.637 ****** 2025-01-16 14:57:43.150146 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:43.150357 | orchestrator | 2025-01-16 14:57:43.280316 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:57:43.280444 | orchestrator | Thursday 16 January 2025 14:57:43 +0000 (0:00:00.332) 0:00:04.970 ****** 2025-01-16 14:57:43.280484 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:43.280541 | orchestrator | 2025-01-16 14:57:43.280553 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:57:43.280564 | orchestrator | Thursday 16 January 2025 14:57:43 +0000 (0:00:00.128) 0:00:05.098 ****** 2025-01-16 14:57:43.404709 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:43.404841 | orchestrator | 2025-01-16 14:57:43.404916 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:57:43.404949 | orchestrator | Thursday 16 January 2025 14:57:43 +0000 (0:00:00.126) 0:00:05.224 ****** 2025-01-16 14:57:43.851885 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-01-16 14:57:43.852054 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-01-16 14:57:43.852078 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-01-16 14:57:43.852097 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-01-16 14:57:43.852116 | orchestrator | 2025-01-16 14:57:43.852136 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:57:43.852162 | orchestrator | Thursday 16 January 2025 14:57:43 +0000 (0:00:00.446) 0:00:05.671 ****** 2025-01-16 14:57:43.975520 | orchestrator | 
skipping: [testbed-node-3] 2025-01-16 14:57:43.976120 | orchestrator | 2025-01-16 14:57:44.100581 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:57:44.100684 | orchestrator | Thursday 16 January 2025 14:57:43 +0000 (0:00:00.123) 0:00:05.795 ****** 2025-01-16 14:57:44.100704 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:44.101078 | orchestrator | 2025-01-16 14:57:44.101603 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:57:44.101626 | orchestrator | Thursday 16 January 2025 14:57:44 +0000 (0:00:00.124) 0:00:05.919 ****** 2025-01-16 14:57:44.226219 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:44.357620 | orchestrator | 2025-01-16 14:57:44.357761 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:57:44.357793 | orchestrator | Thursday 16 January 2025 14:57:44 +0000 (0:00:00.126) 0:00:06.046 ****** 2025-01-16 14:57:44.357836 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:44.438819 | orchestrator | 2025-01-16 14:57:44.439056 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-01-16 14:57:44.439098 | orchestrator | Thursday 16 January 2025 14:57:44 +0000 (0:00:00.130) 0:00:06.176 ****** 2025-01-16 14:57:44.439132 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:44.439242 | orchestrator | 2025-01-16 14:57:44.439267 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-01-16 14:57:44.571680 | orchestrator | Thursday 16 January 2025 14:57:44 +0000 (0:00:00.081) 0:00:06.258 ****** 2025-01-16 14:57:44.571925 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '53488163-bd74-50cc-bfa0-f1a94ed01f33'}}) 2025-01-16 14:57:44.572116 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '562c7eeb-0cc2-5747-a030-082dcf3dd7cc'}}) 2025-01-16 14:57:44.572142 | orchestrator | 2025-01-16 14:57:44.572165 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-01-16 14:57:45.805810 | orchestrator | Thursday 16 January 2025 14:57:44 +0000 (0:00:00.131) 0:00:06.390 ****** 2025-01-16 14:57:45.805919 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-53488163-bd74-50cc-bfa0-f1a94ed01f33', 'data_vg': 'ceph-53488163-bd74-50cc-bfa0-f1a94ed01f33'}) 2025-01-16 14:57:45.805961 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-562c7eeb-0cc2-5747-a030-082dcf3dd7cc', 'data_vg': 'ceph-562c7eeb-0cc2-5747-a030-082dcf3dd7cc'}) 2025-01-16 14:57:45.806056 | orchestrator | 2025-01-16 14:57:45.806067 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-01-16 14:57:45.806186 | orchestrator | Thursday 16 January 2025 14:57:45 +0000 (0:00:01.233) 0:00:07.624 ****** 2025-01-16 14:57:45.904136 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-53488163-bd74-50cc-bfa0-f1a94ed01f33', 'data_vg': 'ceph-53488163-bd74-50cc-bfa0-f1a94ed01f33'})  2025-01-16 14:57:45.904297 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-562c7eeb-0cc2-5747-a030-082dcf3dd7cc', 'data_vg': 'ceph-562c7eeb-0cc2-5747-a030-082dcf3dd7cc'})  2025-01-16 14:57:45.904318 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:45.904412 | orchestrator | 2025-01-16 14:57:45.904526 | 
orchestrator | TASK [Create block LVs] ******************************************************** 2025-01-16 14:57:45.904743 | orchestrator | Thursday 16 January 2025 14:57:45 +0000 (0:00:00.099) 0:00:07.723 ****** 2025-01-16 14:57:46.683000 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-53488163-bd74-50cc-bfa0-f1a94ed01f33', 'data_vg': 'ceph-53488163-bd74-50cc-bfa0-f1a94ed01f33'}) 2025-01-16 14:57:46.783365 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-562c7eeb-0cc2-5747-a030-082dcf3dd7cc', 'data_vg': 'ceph-562c7eeb-0cc2-5747-a030-082dcf3dd7cc'}) 2025-01-16 14:57:46.783488 | orchestrator | 2025-01-16 14:57:46.783509 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-01-16 14:57:46.783525 | orchestrator | Thursday 16 January 2025 14:57:46 +0000 (0:00:00.778) 0:00:08.502 ****** 2025-01-16 14:57:46.783558 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-53488163-bd74-50cc-bfa0-f1a94ed01f33', 'data_vg': 'ceph-53488163-bd74-50cc-bfa0-f1a94ed01f33'})  2025-01-16 14:57:46.869024 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-562c7eeb-0cc2-5747-a030-082dcf3dd7cc', 'data_vg': 'ceph-562c7eeb-0cc2-5747-a030-082dcf3dd7cc'})  2025-01-16 14:57:46.869154 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:46.869179 | orchestrator | 2025-01-16 14:57:46.869194 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-01-16 14:57:46.869209 | orchestrator | Thursday 16 January 2025 14:57:46 +0000 (0:00:00.099) 0:00:08.601 ****** 2025-01-16 14:57:46.869238 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:46.966753 | orchestrator | 2025-01-16 14:57:46.966841 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-01-16 14:57:46.966932 | orchestrator | Thursday 16 January 2025 14:57:46 +0000 (0:00:00.085) 0:00:08.687 ****** 2025-01-16 14:57:46.966958 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-53488163-bd74-50cc-bfa0-f1a94ed01f33', 'data_vg': 'ceph-53488163-bd74-50cc-bfa0-f1a94ed01f33'})  2025-01-16 14:57:46.967005 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-562c7eeb-0cc2-5747-a030-082dcf3dd7cc', 'data_vg': 'ceph-562c7eeb-0cc2-5747-a030-082dcf3dd7cc'})  2025-01-16 14:57:46.970580 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:47.054124 | orchestrator | 2025-01-16 14:57:47.054240 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-01-16 14:57:47.054251 | orchestrator | Thursday 16 January 2025 14:57:46 +0000 (0:00:00.098) 0:00:08.786 ****** 2025-01-16 14:57:47.054270 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:47.054301 | orchestrator | 2025-01-16 14:57:47.054309 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-01-16 14:57:47.054317 | orchestrator | Thursday 16 January 2025 14:57:47 +0000 (0:00:00.088) 0:00:08.874 ****** 2025-01-16 14:57:47.154679 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-53488163-bd74-50cc-bfa0-f1a94ed01f33', 'data_vg': 'ceph-53488163-bd74-50cc-bfa0-f1a94ed01f33'})  2025-01-16 14:57:47.154958 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-562c7eeb-0cc2-5747-a030-082dcf3dd7cc', 'data_vg': 'ceph-562c7eeb-0cc2-5747-a030-082dcf3dd7cc'})  2025-01-16 14:57:47.241236 | orchestrator | skipping: 
[testbed-node-3] 2025-01-16 14:57:47.241323 | orchestrator | 2025-01-16 14:57:47.241333 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-01-16 14:57:47.241340 | orchestrator | Thursday 16 January 2025 14:57:47 +0000 (0:00:00.100) 0:00:08.974 ****** 2025-01-16 14:57:47.241376 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:47.241414 | orchestrator | 2025-01-16 14:57:47.241422 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-01-16 14:57:47.241431 | orchestrator | Thursday 16 January 2025 14:57:47 +0000 (0:00:00.085) 0:00:09.060 ****** 2025-01-16 14:57:47.343209 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-53488163-bd74-50cc-bfa0-f1a94ed01f33', 'data_vg': 'ceph-53488163-bd74-50cc-bfa0-f1a94ed01f33'})  2025-01-16 14:57:47.343398 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-562c7eeb-0cc2-5747-a030-082dcf3dd7cc', 'data_vg': 'ceph-562c7eeb-0cc2-5747-a030-082dcf3dd7cc'})  2025-01-16 14:57:47.343427 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:47.343450 | orchestrator | 2025-01-16 14:57:47.343501 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-01-16 14:57:47.343601 | orchestrator | Thursday 16 January 2025 14:57:47 +0000 (0:00:00.101) 0:00:09.161 ****** 2025-01-16 14:57:47.427436 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:57:47.616655 | orchestrator | 2025-01-16 14:57:47.616775 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-01-16 14:57:47.616793 | orchestrator | Thursday 16 January 2025 14:57:47 +0000 (0:00:00.085) 0:00:09.246 ****** 2025-01-16 14:57:47.616824 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-53488163-bd74-50cc-bfa0-f1a94ed01f33', 'data_vg': 'ceph-53488163-bd74-50cc-bfa0-f1a94ed01f33'})  2025-01-16 14:57:47.616995 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-562c7eeb-0cc2-5747-a030-082dcf3dd7cc', 'data_vg': 'ceph-562c7eeb-0cc2-5747-a030-082dcf3dd7cc'})  2025-01-16 14:57:47.617022 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:47.617240 | orchestrator | 2025-01-16 14:57:47.617268 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-01-16 14:57:47.617605 | orchestrator | Thursday 16 January 2025 14:57:47 +0000 (0:00:00.189) 0:00:09.436 ****** 2025-01-16 14:57:47.716474 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-53488163-bd74-50cc-bfa0-f1a94ed01f33', 'data_vg': 'ceph-53488163-bd74-50cc-bfa0-f1a94ed01f33'})  2025-01-16 14:57:47.716905 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-562c7eeb-0cc2-5747-a030-082dcf3dd7cc', 'data_vg': 'ceph-562c7eeb-0cc2-5747-a030-082dcf3dd7cc'})  2025-01-16 14:57:47.817007 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:47.817131 | orchestrator | 2025-01-16 14:57:47.817149 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-01-16 14:57:47.817163 | orchestrator | Thursday 16 January 2025 14:57:47 +0000 (0:00:00.099) 0:00:09.536 ****** 2025-01-16 14:57:47.817194 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-53488163-bd74-50cc-bfa0-f1a94ed01f33', 'data_vg': 'ceph-53488163-bd74-50cc-bfa0-f1a94ed01f33'})  2025-01-16 14:57:47.817315 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-562c7eeb-0cc2-5747-a030-082dcf3dd7cc', 'data_vg': 'ceph-562c7eeb-0cc2-5747-a030-082dcf3dd7cc'})  2025-01-16 14:57:47.817339 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:47.817360 | orchestrator | 2025-01-16 14:57:47.904343 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-01-16 14:57:47.904439 | orchestrator | Thursday 16 January 2025 14:57:47 +0000 (0:00:00.099) 0:00:09.636 ****** 2025-01-16 14:57:47.904463 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:47.987129 | orchestrator | 2025-01-16 14:57:47.987220 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-01-16 14:57:47.987232 | orchestrator | Thursday 16 January 2025 14:57:47 +0000 (0:00:00.087) 0:00:09.723 ****** 2025-01-16 14:57:47.987270 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:48.071044 | orchestrator | 2025-01-16 14:57:48.071163 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-01-16 14:57:48.071182 | orchestrator | Thursday 16 January 2025 14:57:47 +0000 (0:00:00.083) 0:00:09.807 ****** 2025-01-16 14:57:48.071244 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:48.157179 | orchestrator | 2025-01-16 14:57:48.157287 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-01-16 14:57:48.157304 | orchestrator | Thursday 16 January 2025 14:57:48 +0000 (0:00:00.083) 0:00:09.890 ****** 2025-01-16 14:57:48.157335 | orchestrator | ok: [testbed-node-3] => { 2025-01-16 14:57:48.157419 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-01-16 14:57:48.157437 | orchestrator | } 2025-01-16 14:57:48.157450 | orchestrator | 2025-01-16 14:57:48.157469 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-01-16 14:57:48.243428 | orchestrator | Thursday 16 January 2025 14:57:48 +0000 (0:00:00.086) 0:00:09.977 ****** 2025-01-16 14:57:48.243565 | orchestrator | ok: [testbed-node-3] => { 2025-01-16 14:57:48.243674 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-01-16 14:57:48.243699 | orchestrator | } 2025-01-16 14:57:48.243715 | orchestrator | 2025-01-16 14:57:48.243738 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-01-16 14:57:48.331188 | orchestrator | Thursday 16 January 2025 14:57:48 +0000 (0:00:00.086) 0:00:10.063 ****** 2025-01-16 14:57:48.331336 | orchestrator | ok: [testbed-node-3] => { 2025-01-16 14:57:48.331464 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-01-16 14:57:48.331479 | orchestrator | } 2025-01-16 14:57:48.331492 | orchestrator | 2025-01-16 14:57:48.707142 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-01-16 14:57:48.707252 | orchestrator | Thursday 16 January 2025 14:57:48 +0000 (0:00:00.087) 0:00:10.150 ****** 2025-01-16 14:57:48.707282 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:57:48.994520 | orchestrator | 2025-01-16 14:57:48.994562 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-01-16 14:57:48.994570 | orchestrator | Thursday 16 January 2025 14:57:48 +0000 (0:00:00.376) 0:00:10.526 ****** 2025-01-16 14:57:48.994581 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:57:48.994733 | orchestrator | 2025-01-16 14:57:48.994745 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] 
**************** 2025-01-16 14:57:48.995062 | orchestrator | Thursday 16 January 2025 14:57:48 +0000 (0:00:00.287) 0:00:10.814 ****** 2025-01-16 14:57:49.382187 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:57:49.468623 | orchestrator | 2025-01-16 14:57:49.468731 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-01-16 14:57:49.468747 | orchestrator | Thursday 16 January 2025 14:57:49 +0000 (0:00:00.386) 0:00:11.200 ****** 2025-01-16 14:57:49.468769 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:57:49.536950 | orchestrator | 2025-01-16 14:57:49.537086 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-01-16 14:57:49.537110 | orchestrator | Thursday 16 January 2025 14:57:49 +0000 (0:00:00.087) 0:00:11.288 ****** 2025-01-16 14:57:49.537145 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:49.603845 | orchestrator | 2025-01-16 14:57:49.603968 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-01-16 14:57:49.603980 | orchestrator | Thursday 16 January 2025 14:57:49 +0000 (0:00:00.068) 0:00:11.356 ****** 2025-01-16 14:57:49.604001 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:49.687193 | orchestrator | 2025-01-16 14:57:49.687304 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-01-16 14:57:49.687323 | orchestrator | Thursday 16 January 2025 14:57:49 +0000 (0:00:00.067) 0:00:11.423 ****** 2025-01-16 14:57:49.687356 | orchestrator | ok: [testbed-node-3] => { 2025-01-16 14:57:49.688245 | orchestrator |  "vgs_report": { 2025-01-16 14:57:49.688347 | orchestrator |  "vg": [] 2025-01-16 14:57:49.688362 | orchestrator |  } 2025-01-16 14:57:49.688373 | orchestrator | } 2025-01-16 14:57:49.688384 | orchestrator | 2025-01-16 14:57:49.688407 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-01-16 14:57:49.768379 | orchestrator | Thursday 16 January 2025 14:57:49 +0000 (0:00:00.082) 0:00:11.506 ****** 2025-01-16 14:57:49.768482 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:49.851299 | orchestrator | 2025-01-16 14:57:49.851440 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-01-16 14:57:49.851472 | orchestrator | Thursday 16 January 2025 14:57:49 +0000 (0:00:00.081) 0:00:11.588 ****** 2025-01-16 14:57:49.851516 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:49.932958 | orchestrator | 2025-01-16 14:57:49.933066 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-01-16 14:57:49.933080 | orchestrator | Thursday 16 January 2025 14:57:49 +0000 (0:00:00.082) 0:00:11.670 ****** 2025-01-16 14:57:49.933105 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:49.934157 | orchestrator | 2025-01-16 14:57:50.029552 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-01-16 14:57:50.029644 | orchestrator | Thursday 16 January 2025 14:57:49 +0000 (0:00:00.081) 0:00:11.752 ****** 2025-01-16 14:57:50.029669 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:50.117704 | orchestrator | 2025-01-16 14:57:50.117809 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-01-16 14:57:50.117820 | orchestrator | Thursday 16 January 2025 14:57:50 +0000 (0:00:00.096) 0:00:11.849 ****** 2025-01-16 
14:57:50.117919 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:50.117973 | orchestrator | 2025-01-16 14:57:50.117983 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-01-16 14:57:50.117993 | orchestrator | Thursday 16 January 2025 14:57:50 +0000 (0:00:00.088) 0:00:11.937 ****** 2025-01-16 14:57:50.198654 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:50.281089 | orchestrator | 2025-01-16 14:57:50.281217 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-01-16 14:57:50.281237 | orchestrator | Thursday 16 January 2025 14:57:50 +0000 (0:00:00.081) 0:00:12.018 ****** 2025-01-16 14:57:50.281272 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:50.475778 | orchestrator | 2025-01-16 14:57:50.475984 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-01-16 14:57:50.475999 | orchestrator | Thursday 16 January 2025 14:57:50 +0000 (0:00:00.082) 0:00:12.100 ****** 2025-01-16 14:57:50.476021 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:50.476093 | orchestrator | 2025-01-16 14:57:50.476110 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-01-16 14:57:50.476126 | orchestrator | Thursday 16 January 2025 14:57:50 +0000 (0:00:00.193) 0:00:12.294 ****** 2025-01-16 14:57:50.560433 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:50.560555 | orchestrator | 2025-01-16 14:57:50.560578 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-01-16 14:57:50.560596 | orchestrator | Thursday 16 January 2025 14:57:50 +0000 (0:00:00.085) 0:00:12.380 ****** 2025-01-16 14:57:50.642978 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:50.723584 | orchestrator | 2025-01-16 14:57:50.723698 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-01-16 14:57:50.723716 | orchestrator | Thursday 16 January 2025 14:57:50 +0000 (0:00:00.082) 0:00:12.462 ****** 2025-01-16 14:57:50.723747 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:50.809665 | orchestrator | 2025-01-16 14:57:50.809773 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-01-16 14:57:50.809783 | orchestrator | Thursday 16 January 2025 14:57:50 +0000 (0:00:00.080) 0:00:12.543 ****** 2025-01-16 14:57:50.809801 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:50.809879 | orchestrator | 2025-01-16 14:57:50.810063 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-01-16 14:57:50.810079 | orchestrator | Thursday 16 January 2025 14:57:50 +0000 (0:00:00.085) 0:00:12.629 ****** 2025-01-16 14:57:50.893783 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:50.893976 | orchestrator | 2025-01-16 14:57:50.893992 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-01-16 14:57:50.894006 | orchestrator | Thursday 16 January 2025 14:57:50 +0000 (0:00:00.084) 0:00:12.713 ****** 2025-01-16 14:57:50.980794 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:50.980947 | orchestrator | 2025-01-16 14:57:50.980963 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-01-16 14:57:50.981112 | orchestrator | Thursday 16 January 2025 14:57:50 +0000 (0:00:00.086) 0:00:12.800 
****** 2025-01-16 14:57:51.085271 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-53488163-bd74-50cc-bfa0-f1a94ed01f33', 'data_vg': 'ceph-53488163-bd74-50cc-bfa0-f1a94ed01f33'})  2025-01-16 14:57:51.085482 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-562c7eeb-0cc2-5747-a030-082dcf3dd7cc', 'data_vg': 'ceph-562c7eeb-0cc2-5747-a030-082dcf3dd7cc'})  2025-01-16 14:57:51.085507 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:51.085524 | orchestrator | 2025-01-16 14:57:51.085563 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-01-16 14:57:51.085642 | orchestrator | Thursday 16 January 2025 14:57:51 +0000 (0:00:00.104) 0:00:12.905 ****** 2025-01-16 14:57:51.181846 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-53488163-bd74-50cc-bfa0-f1a94ed01f33', 'data_vg': 'ceph-53488163-bd74-50cc-bfa0-f1a94ed01f33'})  2025-01-16 14:57:51.182143 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-562c7eeb-0cc2-5747-a030-082dcf3dd7cc', 'data_vg': 'ceph-562c7eeb-0cc2-5747-a030-082dcf3dd7cc'})  2025-01-16 14:57:51.182167 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:51.182184 | orchestrator | 2025-01-16 14:57:51.182472 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-01-16 14:57:51.182490 | orchestrator | Thursday 16 January 2025 14:57:51 +0000 (0:00:00.096) 0:00:13.001 ****** 2025-01-16 14:57:51.283835 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-53488163-bd74-50cc-bfa0-f1a94ed01f33', 'data_vg': 'ceph-53488163-bd74-50cc-bfa0-f1a94ed01f33'})  2025-01-16 14:57:51.286401 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-562c7eeb-0cc2-5747-a030-082dcf3dd7cc', 'data_vg': 'ceph-562c7eeb-0cc2-5747-a030-082dcf3dd7cc'})  2025-01-16 14:57:51.286466 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:51.286503 | orchestrator | 2025-01-16 14:57:51.286561 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-01-16 14:57:51.286577 | orchestrator | Thursday 16 January 2025 14:57:51 +0000 (0:00:00.101) 0:00:13.103 ****** 2025-01-16 14:57:51.383086 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-53488163-bd74-50cc-bfa0-f1a94ed01f33', 'data_vg': 'ceph-53488163-bd74-50cc-bfa0-f1a94ed01f33'})  2025-01-16 14:57:51.383328 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-562c7eeb-0cc2-5747-a030-082dcf3dd7cc', 'data_vg': 'ceph-562c7eeb-0cc2-5747-a030-082dcf3dd7cc'})  2025-01-16 14:57:51.383358 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:51.383384 | orchestrator | 2025-01-16 14:57:51.383731 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-01-16 14:57:51.384039 | orchestrator | Thursday 16 January 2025 14:57:51 +0000 (0:00:00.098) 0:00:13.201 ****** 2025-01-16 14:57:51.484014 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-53488163-bd74-50cc-bfa0-f1a94ed01f33', 'data_vg': 'ceph-53488163-bd74-50cc-bfa0-f1a94ed01f33'})  2025-01-16 14:57:51.484194 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-562c7eeb-0cc2-5747-a030-082dcf3dd7cc', 'data_vg': 'ceph-562c7eeb-0cc2-5747-a030-082dcf3dd7cc'})  2025-01-16 14:57:51.484670 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:51.484846 | orchestrator | 2025-01-16 14:57:51.484882 | 
orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-01-16 14:57:51.485309 | orchestrator | Thursday 16 January 2025 14:57:51 +0000 (0:00:00.101) 0:00:13.303 ****** 2025-01-16 14:57:51.691342 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-53488163-bd74-50cc-bfa0-f1a94ed01f33', 'data_vg': 'ceph-53488163-bd74-50cc-bfa0-f1a94ed01f33'})  2025-01-16 14:57:51.691623 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-562c7eeb-0cc2-5747-a030-082dcf3dd7cc', 'data_vg': 'ceph-562c7eeb-0cc2-5747-a030-082dcf3dd7cc'})  2025-01-16 14:57:51.691673 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:51.691689 | orchestrator | 2025-01-16 14:57:51.691994 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-01-16 14:57:51.692156 | orchestrator | Thursday 16 January 2025 14:57:51 +0000 (0:00:00.207) 0:00:13.510 ****** 2025-01-16 14:57:51.795219 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-53488163-bd74-50cc-bfa0-f1a94ed01f33', 'data_vg': 'ceph-53488163-bd74-50cc-bfa0-f1a94ed01f33'})  2025-01-16 14:57:51.795387 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-562c7eeb-0cc2-5747-a030-082dcf3dd7cc', 'data_vg': 'ceph-562c7eeb-0cc2-5747-a030-082dcf3dd7cc'})  2025-01-16 14:57:51.795406 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:51.795422 | orchestrator | 2025-01-16 14:57:51.795644 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-01-16 14:57:51.795836 | orchestrator | Thursday 16 January 2025 14:57:51 +0000 (0:00:00.103) 0:00:13.614 ****** 2025-01-16 14:57:51.896213 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-53488163-bd74-50cc-bfa0-f1a94ed01f33', 'data_vg': 'ceph-53488163-bd74-50cc-bfa0-f1a94ed01f33'})  2025-01-16 14:57:51.896546 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-562c7eeb-0cc2-5747-a030-082dcf3dd7cc', 'data_vg': 'ceph-562c7eeb-0cc2-5747-a030-082dcf3dd7cc'})  2025-01-16 14:57:51.896581 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:51.896619 | orchestrator | 2025-01-16 14:57:51.896693 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-01-16 14:57:51.897013 | orchestrator | Thursday 16 January 2025 14:57:51 +0000 (0:00:00.100) 0:00:13.715 ****** 2025-01-16 14:57:52.188773 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:57:52.189027 | orchestrator | 2025-01-16 14:57:52.189057 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-01-16 14:57:52.189081 | orchestrator | Thursday 16 January 2025 14:57:52 +0000 (0:00:00.292) 0:00:14.008 ****** 2025-01-16 14:57:52.472026 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:57:52.561081 | orchestrator | 2025-01-16 14:57:52.561183 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-01-16 14:57:52.561196 | orchestrator | Thursday 16 January 2025 14:57:52 +0000 (0:00:00.282) 0:00:14.291 ****** 2025-01-16 14:57:52.561218 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:57:52.561260 | orchestrator | 2025-01-16 14:57:52.561271 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-01-16 14:57:52.561282 | orchestrator | Thursday 16 January 2025 14:57:52 +0000 (0:00:00.089) 0:00:14.381 ****** 2025-01-16 14:57:52.671154 | 
orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-53488163-bd74-50cc-bfa0-f1a94ed01f33', 'vg_name': 'ceph-53488163-bd74-50cc-bfa0-f1a94ed01f33'}) 2025-01-16 14:57:52.777267 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-562c7eeb-0cc2-5747-a030-082dcf3dd7cc', 'vg_name': 'ceph-562c7eeb-0cc2-5747-a030-082dcf3dd7cc'}) 2025-01-16 14:57:52.777352 | orchestrator | 2025-01-16 14:57:52.777360 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-01-16 14:57:52.777372 | orchestrator | Thursday 16 January 2025 14:57:52 +0000 (0:00:00.109) 0:00:14.490 ****** 2025-01-16 14:57:52.777388 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-53488163-bd74-50cc-bfa0-f1a94ed01f33', 'data_vg': 'ceph-53488163-bd74-50cc-bfa0-f1a94ed01f33'})  2025-01-16 14:57:52.777428 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-562c7eeb-0cc2-5747-a030-082dcf3dd7cc', 'data_vg': 'ceph-562c7eeb-0cc2-5747-a030-082dcf3dd7cc'})  2025-01-16 14:57:52.777435 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:52.777441 | orchestrator | 2025-01-16 14:57:52.777448 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-01-16 14:57:52.882487 | orchestrator | Thursday 16 January 2025 14:57:52 +0000 (0:00:00.106) 0:00:14.597 ****** 2025-01-16 14:57:52.882649 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-53488163-bd74-50cc-bfa0-f1a94ed01f33', 'data_vg': 'ceph-53488163-bd74-50cc-bfa0-f1a94ed01f33'})  2025-01-16 14:57:52.882742 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-562c7eeb-0cc2-5747-a030-082dcf3dd7cc', 'data_vg': 'ceph-562c7eeb-0cc2-5747-a030-082dcf3dd7cc'})  2025-01-16 14:57:52.882763 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:52.882781 | orchestrator | 2025-01-16 14:57:52.882804 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-01-16 14:57:52.986992 | orchestrator | Thursday 16 January 2025 14:57:52 +0000 (0:00:00.105) 0:00:14.702 ****** 2025-01-16 14:57:52.987129 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-53488163-bd74-50cc-bfa0-f1a94ed01f33', 'data_vg': 'ceph-53488163-bd74-50cc-bfa0-f1a94ed01f33'})  2025-01-16 14:57:52.987214 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-562c7eeb-0cc2-5747-a030-082dcf3dd7cc', 'data_vg': 'ceph-562c7eeb-0cc2-5747-a030-082dcf3dd7cc'})  2025-01-16 14:57:52.987232 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:57:52.987246 | orchestrator | 2025-01-16 14:57:52.987276 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-01-16 14:57:52.987294 | orchestrator | Thursday 16 January 2025 14:57:52 +0000 (0:00:00.104) 0:00:14.806 ****** 2025-01-16 14:57:53.528504 | orchestrator | ok: [testbed-node-3] => { 2025-01-16 14:57:53.528806 | orchestrator |  "lvm_report": { 2025-01-16 14:57:53.528943 | orchestrator |  "lv": [ 2025-01-16 14:57:53.528975 | orchestrator |  { 2025-01-16 14:57:53.529101 | orchestrator |  "lv_name": "osd-block-53488163-bd74-50cc-bfa0-f1a94ed01f33", 2025-01-16 14:57:53.529132 | orchestrator |  "vg_name": "ceph-53488163-bd74-50cc-bfa0-f1a94ed01f33" 2025-01-16 14:57:53.529625 | orchestrator |  }, 2025-01-16 14:57:53.529755 | orchestrator |  { 2025-01-16 14:57:53.531097 | orchestrator |  "lv_name": "osd-block-562c7eeb-0cc2-5747-a030-082dcf3dd7cc", 2025-01-16 
14:57:53.531913 | orchestrator |  "vg_name": "ceph-562c7eeb-0cc2-5747-a030-082dcf3dd7cc" 2025-01-16 14:57:53.531964 | orchestrator |  } 2025-01-16 14:57:53.531982 | orchestrator |  ], 2025-01-16 14:57:53.531998 | orchestrator |  "pv": [ 2025-01-16 14:57:53.532013 | orchestrator |  { 2025-01-16 14:57:53.532027 | orchestrator |  "pv_name": "/dev/sdb", 2025-01-16 14:57:53.532043 | orchestrator |  "vg_name": "ceph-53488163-bd74-50cc-bfa0-f1a94ed01f33" 2025-01-16 14:57:53.532058 | orchestrator |  }, 2025-01-16 14:57:53.532087 | orchestrator |  { 2025-01-16 14:57:53.532111 | orchestrator |  "pv_name": "/dev/sdc", 2025-01-16 14:57:53.533390 | orchestrator |  "vg_name": "ceph-562c7eeb-0cc2-5747-a030-082dcf3dd7cc" 2025-01-16 14:57:53.533450 | orchestrator |  } 2025-01-16 14:57:53.533466 | orchestrator |  ] 2025-01-16 14:57:53.533487 | orchestrator |  } 2025-01-16 14:57:53.688623 | orchestrator | } 2025-01-16 14:57:53.688773 | orchestrator | 2025-01-16 14:57:53.688804 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-01-16 14:57:53.688831 | orchestrator | 2025-01-16 14:57:53.688941 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-01-16 14:57:53.688970 | orchestrator | Thursday 16 January 2025 14:57:53 +0000 (0:00:00.540) 0:00:15.347 ****** 2025-01-16 14:57:53.689017 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-01-16 14:57:53.836331 | orchestrator | 2025-01-16 14:57:53.836455 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-01-16 14:57:53.836474 | orchestrator | Thursday 16 January 2025 14:57:53 +0000 (0:00:00.160) 0:00:15.507 ****** 2025-01-16 14:57:53.836505 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:57:54.137199 | orchestrator | 2025-01-16 14:57:54.137303 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:57:54.137317 | orchestrator | Thursday 16 January 2025 14:57:53 +0000 (0:00:00.147) 0:00:15.655 ****** 2025-01-16 14:57:54.137368 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-01-16 14:57:54.137419 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-01-16 14:57:54.137433 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-01-16 14:57:54.137592 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-01-16 14:57:54.137846 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-01-16 14:57:54.138116 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-01-16 14:57:54.138332 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-01-16 14:57:54.140401 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-01-16 14:57:54.140596 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-01-16 14:57:54.140613 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-01-16 14:57:54.140618 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-01-16 14:57:54.140624 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-01-16 14:57:54.140632 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-01-16 14:57:54.140640 | orchestrator | 2025-01-16 14:57:54.140652 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:57:54.140759 | orchestrator | Thursday 16 January 2025 14:57:54 +0000 (0:00:00.301) 0:00:15.957 ****** 2025-01-16 14:57:54.257394 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:57:54.257599 | orchestrator | 2025-01-16 14:57:54.257811 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:57:54.257842 | orchestrator | Thursday 16 January 2025 14:57:54 +0000 (0:00:00.120) 0:00:16.077 ****** 2025-01-16 14:57:54.507982 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:57:54.508161 | orchestrator | 2025-01-16 14:57:54.508189 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:57:54.508211 | orchestrator | Thursday 16 January 2025 14:57:54 +0000 (0:00:00.249) 0:00:16.327 ****** 2025-01-16 14:57:54.634736 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:57:54.763446 | orchestrator | 2025-01-16 14:57:54.763571 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:57:54.763586 | orchestrator | Thursday 16 January 2025 14:57:54 +0000 (0:00:00.126) 0:00:16.453 ****** 2025-01-16 14:57:54.763610 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:57:54.763656 | orchestrator | 2025-01-16 14:57:54.763668 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:57:54.763681 | orchestrator | Thursday 16 January 2025 14:57:54 +0000 (0:00:00.129) 0:00:16.583 ****** 2025-01-16 14:57:54.895114 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:57:55.036453 | orchestrator | 2025-01-16 14:57:55.036540 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:57:55.036550 | orchestrator | Thursday 16 January 2025 14:57:54 +0000 (0:00:00.131) 0:00:16.714 ****** 2025-01-16 14:57:55.036577 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:57:55.036635 | orchestrator | 2025-01-16 14:57:55.036648 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:57:55.162817 | orchestrator | Thursday 16 January 2025 14:57:55 +0000 (0:00:00.141) 0:00:16.856 ****** 2025-01-16 14:57:55.163020 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:57:55.163129 | orchestrator | 2025-01-16 14:57:55.163152 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:57:55.163172 | orchestrator | Thursday 16 January 2025 14:57:55 +0000 (0:00:00.126) 0:00:16.982 ****** 2025-01-16 14:57:55.292419 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:57:55.574233 | orchestrator | 2025-01-16 14:57:55.574343 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:57:55.574356 | orchestrator | Thursday 16 January 2025 14:57:55 +0000 (0:00:00.129) 0:00:17.112 ****** 2025-01-16 14:57:55.574376 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_899cd4fc-0b48-4c8f-9f9b-d306ab958cdd) 2025-01-16 14:57:55.574416 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_899cd4fc-0b48-4c8f-9f9b-d306ab958cdd) 2025-01-16 14:57:55.574427 | orchestrator | 2025-01-16 14:57:55.574623 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:57:55.574679 | orchestrator | Thursday 16 January 2025 14:57:55 +0000 (0:00:00.281) 0:00:17.393 ****** 2025-01-16 14:57:55.863151 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d1e8c7e9-38c3-4780-8ab7-178f632f9eb8) 2025-01-16 14:57:55.863532 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d1e8c7e9-38c3-4780-8ab7-178f632f9eb8) 2025-01-16 14:57:55.863564 | orchestrator | 2025-01-16 14:57:56.262534 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:57:56.262648 | orchestrator | Thursday 16 January 2025 14:57:55 +0000 (0:00:00.289) 0:00:17.682 ****** 2025-01-16 14:57:56.262671 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_511497a6-ce11-47ca-8c02-acccaddecbc9) 2025-01-16 14:57:56.741600 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_511497a6-ce11-47ca-8c02-acccaddecbc9) 2025-01-16 14:57:56.741748 | orchestrator | 2025-01-16 14:57:56.741773 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:57:56.741788 | orchestrator | Thursday 16 January 2025 14:57:56 +0000 (0:00:00.398) 0:00:18.081 ****** 2025-01-16 14:57:56.741825 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f7bd705e-b5e0-4446-bf55-1dfa4188ee04) 2025-01-16 14:57:56.741951 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f7bd705e-b5e0-4446-bf55-1dfa4188ee04) 2025-01-16 14:57:56.741977 | orchestrator | 2025-01-16 14:57:56.741994 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:57:56.742075 | orchestrator | Thursday 16 January 2025 14:57:56 +0000 (0:00:00.478) 0:00:18.560 ****** 2025-01-16 14:57:56.952991 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-01-16 14:57:56.954254 | orchestrator | 2025-01-16 14:57:57.257223 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:57:57.257342 | orchestrator | Thursday 16 January 2025 14:57:56 +0000 (0:00:00.212) 0:00:18.773 ****** 2025-01-16 14:57:57.257369 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-01-16 14:57:57.257569 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-01-16 14:57:57.257592 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-01-16 14:57:57.257677 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-01-16 14:57:57.258083 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-01-16 14:57:57.258296 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-01-16 14:57:57.258609 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-01-16 14:57:57.258795 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-01-16 14:57:57.259171 | orchestrator | included: 
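The repeated "Add known links to the list of available block devices" include (/ansible/tasks/_add-device-links.yml) resolves /dev/disk/by-id aliases such as the scsi-0QEMU_* and ata-QEMU_DVD-ROM_* entries reported as ok above. That file is not visible in this log; one possible per-device implementation, assuming GNU find's -lname matching and a fact named available_devices (both assumptions, not taken from the log):

# sketch only; "device" stands for the include's loop variable, e.g. sdb
- name: Find /dev/disk/by-id links pointing at {{ device }}
  ansible.builtin.command:
    cmd: find /dev/disk/by-id -lname '*/{{ device }}'
  register: _device_links
  changed_when: false
  failed_when: false

- name: Add known links to the list of available block devices
  ansible.builtin.set_fact:
    available_devices: "{{ available_devices | default([]) + (_device_links.stdout_lines | map('basename') | list) }}"
  when: _device_links.stdout_lines | length > 0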
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-01-16 14:57:57.259302 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-01-16 14:57:57.259569 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-01-16 14:57:57.259769 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-01-16 14:57:57.260041 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-01-16 14:57:57.260244 | orchestrator | 2025-01-16 14:57:57.260518 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:57:57.261095 | orchestrator | Thursday 16 January 2025 14:57:57 +0000 (0:00:00.303) 0:00:19.077 ****** 2025-01-16 14:57:57.387053 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:57:57.387677 | orchestrator | 2025-01-16 14:57:57.387801 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:57:57.387847 | orchestrator | Thursday 16 January 2025 14:57:57 +0000 (0:00:00.129) 0:00:19.206 ****** 2025-01-16 14:57:57.513287 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:57:57.513435 | orchestrator | 2025-01-16 14:57:57.513452 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:57:57.513465 | orchestrator | Thursday 16 January 2025 14:57:57 +0000 (0:00:00.126) 0:00:19.333 ****** 2025-01-16 14:57:57.639785 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:57:57.639939 | orchestrator | 2025-01-16 14:57:57.639951 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:57:57.639960 | orchestrator | Thursday 16 January 2025 14:57:57 +0000 (0:00:00.126) 0:00:19.459 ****** 2025-01-16 14:57:57.768139 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:57:57.899766 | orchestrator | 2025-01-16 14:57:57.899933 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:57:57.899953 | orchestrator | Thursday 16 January 2025 14:57:57 +0000 (0:00:00.127) 0:00:19.587 ****** 2025-01-16 14:57:57.899986 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:57:58.026150 | orchestrator | 2025-01-16 14:57:58.026253 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:57:58.026267 | orchestrator | Thursday 16 January 2025 14:57:57 +0000 (0:00:00.129) 0:00:19.716 ****** 2025-01-16 14:57:58.026291 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:57:58.154435 | orchestrator | 2025-01-16 14:57:58.154573 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:57:58.154600 | orchestrator | Thursday 16 January 2025 14:57:58 +0000 (0:00:00.129) 0:00:19.845 ****** 2025-01-16 14:57:58.154636 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:57:58.504357 | orchestrator | 2025-01-16 14:57:58.504467 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:57:58.504483 | orchestrator | Thursday 16 January 2025 14:57:58 +0000 (0:00:00.128) 0:00:19.974 ****** 2025-01-16 14:57:58.504510 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:57:58.504567 | orchestrator | 2025-01-16 14:57:58.504581 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-01-16 14:57:58.504596 | orchestrator | Thursday 16 January 2025 14:57:58 +0000 (0:00:00.349) 0:00:20.323 ****** 2025-01-16 14:57:58.938778 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-01-16 14:57:58.939017 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-01-16 14:57:58.939046 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-01-16 14:57:58.939073 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-01-16 14:57:58.939230 | orchestrator | 2025-01-16 14:57:58.939263 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:57:58.939363 | orchestrator | Thursday 16 January 2025 14:57:58 +0000 (0:00:00.435) 0:00:20.758 ****** 2025-01-16 14:57:59.069551 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:57:59.195077 | orchestrator | 2025-01-16 14:57:59.195166 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:57:59.195194 | orchestrator | Thursday 16 January 2025 14:57:59 +0000 (0:00:00.130) 0:00:20.888 ****** 2025-01-16 14:57:59.195210 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:57:59.195237 | orchestrator | 2025-01-16 14:57:59.195245 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:57:59.195285 | orchestrator | Thursday 16 January 2025 14:57:59 +0000 (0:00:00.126) 0:00:21.015 ****** 2025-01-16 14:57:59.321276 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:57:59.321408 | orchestrator | 2025-01-16 14:57:59.321423 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:57:59.321536 | orchestrator | Thursday 16 January 2025 14:57:59 +0000 (0:00:00.126) 0:00:21.141 ****** 2025-01-16 14:57:59.450728 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:57:59.450980 | orchestrator | 2025-01-16 14:57:59.451024 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-01-16 14:57:59.536944 | orchestrator | Thursday 16 January 2025 14:57:59 +0000 (0:00:00.128) 0:00:21.269 ****** 2025-01-16 14:57:59.536993 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:57:59.537024 | orchestrator | 2025-01-16 14:57:59.537037 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-01-16 14:57:59.537045 | orchestrator | Thursday 16 January 2025 14:57:59 +0000 (0:00:00.086) 0:00:21.356 ****** 2025-01-16 14:57:59.671072 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd9c27d09-d80a-5255-9afb-1d5e2e5f2f02'}}) 2025-01-16 14:57:59.671201 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9e6463fb-b573-5867-8a5d-b884b3259bdd'}}) 2025-01-16 14:57:59.671216 | orchestrator | 2025-01-16 14:57:59.671229 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-01-16 14:57:59.671360 | orchestrator | Thursday 16 January 2025 14:57:59 +0000 (0:00:00.134) 0:00:21.490 ****** 2025-01-16 14:58:00.677179 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02', 'data_vg': 'ceph-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02'}) 2025-01-16 14:58:00.774662 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-9e6463fb-b573-5867-8a5d-b884b3259bdd', 'data_vg': 
'ceph-9e6463fb-b573-5867-8a5d-b884b3259bdd'}) 2025-01-16 14:58:00.774782 | orchestrator | 2025-01-16 14:58:00.774794 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-01-16 14:58:00.774804 | orchestrator | Thursday 16 January 2025 14:58:00 +0000 (0:00:01.004) 0:00:22.495 ****** 2025-01-16 14:58:00.774824 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02', 'data_vg': 'ceph-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02'})  2025-01-16 14:58:00.774959 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e6463fb-b573-5867-8a5d-b884b3259bdd', 'data_vg': 'ceph-9e6463fb-b573-5867-8a5d-b884b3259bdd'})  2025-01-16 14:58:00.774975 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:58:00.774984 | orchestrator | 2025-01-16 14:58:00.774992 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-01-16 14:58:00.775003 | orchestrator | Thursday 16 January 2025 14:58:00 +0000 (0:00:00.096) 0:00:22.592 ****** 2025-01-16 14:58:01.436200 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02', 'data_vg': 'ceph-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02'}) 2025-01-16 14:58:01.541029 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-9e6463fb-b573-5867-8a5d-b884b3259bdd', 'data_vg': 'ceph-9e6463fb-b573-5867-8a5d-b884b3259bdd'}) 2025-01-16 14:58:01.541132 | orchestrator | 2025-01-16 14:58:01.541144 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-01-16 14:58:01.541173 | orchestrator | Thursday 16 January 2025 14:58:01 +0000 (0:00:00.662) 0:00:23.255 ****** 2025-01-16 14:58:01.541193 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02', 'data_vg': 'ceph-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02'})  2025-01-16 14:58:01.541242 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e6463fb-b573-5867-8a5d-b884b3259bdd', 'data_vg': 'ceph-9e6463fb-b573-5867-8a5d-b884b3259bdd'})  2025-01-16 14:58:01.541255 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:58:01.541299 | orchestrator | 2025-01-16 14:58:01.541452 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-01-16 14:58:01.541607 | orchestrator | Thursday 16 January 2025 14:58:01 +0000 (0:00:00.105) 0:00:23.360 ****** 2025-01-16 14:58:01.628436 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:58:01.629129 | orchestrator | 2025-01-16 14:58:01.629432 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-01-16 14:58:01.629685 | orchestrator | Thursday 16 January 2025 14:58:01 +0000 (0:00:00.087) 0:00:23.448 ****** 2025-01-16 14:58:01.735590 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02', 'data_vg': 'ceph-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02'})  2025-01-16 14:58:01.737068 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e6463fb-b573-5867-8a5d-b884b3259bdd', 'data_vg': 'ceph-9e6463fb-b573-5867-8a5d-b884b3259bdd'})  2025-01-16 14:58:01.821547 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:58:01.821668 | orchestrator | 2025-01-16 14:58:01.821689 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-01-16 14:58:01.821740 | orchestrator | Thursday 
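The changed "Create block VGs" and "Create block LVs" tasks above create one volume group and one osd-block-* logical volume per lvm_volumes entry, backed by /dev/sdb and /dev/sdc according to the dict built from ceph_osd_devices. A rough equivalent using the community.general LVM modules; the _block_vg_pvs lookup and the 100%VG sizing are assumptions, not read from the log:

# sketch only
- name: Create block VGs
  community.general.lvg:
    vg: "{{ item.data_vg }}"
    pvs: "{{ _block_vg_pvs[item.data_vg] }}"   # e.g. /dev/sdb, derived from ceph_osd_devices
  loop: "{{ lvm_volumes }}"

- name: Create block LVs
  community.general.lvol:
    vg: "{{ item.data_vg }}"
    lv: "{{ item.data }}"
    size: 100%VG          # dedicate the whole VG to the OSD block LV
  loop: "{{ lvm_volumes }}"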
16 January 2025 14:58:01 +0000 (0:00:00.107) 0:00:23.555 ****** 2025-01-16 14:58:01.821774 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:58:01.821850 | orchestrator | 2025-01-16 14:58:01.821905 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-01-16 14:58:01.928150 | orchestrator | Thursday 16 January 2025 14:58:01 +0000 (0:00:00.086) 0:00:23.641 ****** 2025-01-16 14:58:01.928252 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02', 'data_vg': 'ceph-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02'})  2025-01-16 14:58:01.928302 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e6463fb-b573-5867-8a5d-b884b3259bdd', 'data_vg': 'ceph-9e6463fb-b573-5867-8a5d-b884b3259bdd'})  2025-01-16 14:58:01.928565 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:58:01.928587 | orchestrator | 2025-01-16 14:58:01.928728 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-01-16 14:58:01.929182 | orchestrator | Thursday 16 January 2025 14:58:01 +0000 (0:00:00.106) 0:00:23.747 ****** 2025-01-16 14:58:02.017934 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:58:02.018464 | orchestrator | 2025-01-16 14:58:02.120687 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-01-16 14:58:02.120835 | orchestrator | Thursday 16 January 2025 14:58:02 +0000 (0:00:00.089) 0:00:23.837 ****** 2025-01-16 14:58:02.120964 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02', 'data_vg': 'ceph-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02'})  2025-01-16 14:58:02.121091 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e6463fb-b573-5867-8a5d-b884b3259bdd', 'data_vg': 'ceph-9e6463fb-b573-5867-8a5d-b884b3259bdd'})  2025-01-16 14:58:02.121186 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:58:02.121208 | orchestrator | 2025-01-16 14:58:02.121229 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-01-16 14:58:02.121379 | orchestrator | Thursday 16 January 2025 14:58:02 +0000 (0:00:00.103) 0:00:23.940 ****** 2025-01-16 14:58:02.209359 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:58:02.209564 | orchestrator | 2025-01-16 14:58:02.209596 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-01-16 14:58:02.209621 | orchestrator | Thursday 16 January 2025 14:58:02 +0000 (0:00:00.088) 0:00:24.029 ****** 2025-01-16 14:58:02.319399 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02', 'data_vg': 'ceph-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02'})  2025-01-16 14:58:02.322570 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e6463fb-b573-5867-8a5d-b884b3259bdd', 'data_vg': 'ceph-9e6463fb-b573-5867-8a5d-b884b3259bdd'})  2025-01-16 14:58:02.323630 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:58:02.323851 | orchestrator | 2025-01-16 14:58:02.323978 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-01-16 14:58:02.423536 | orchestrator | Thursday 16 January 2025 14:58:02 +0000 (0:00:00.109) 0:00:24.138 ****** 2025-01-16 14:58:02.423697 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02', 'data_vg': 
'ceph-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02'})  2025-01-16 14:58:02.423797 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e6463fb-b573-5867-8a5d-b884b3259bdd', 'data_vg': 'ceph-9e6463fb-b573-5867-8a5d-b884b3259bdd'})  2025-01-16 14:58:02.423834 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:58:02.423910 | orchestrator | 2025-01-16 14:58:02.423995 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-01-16 14:58:02.424020 | orchestrator | Thursday 16 January 2025 14:58:02 +0000 (0:00:00.104) 0:00:24.243 ****** 2025-01-16 14:58:02.540269 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02', 'data_vg': 'ceph-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02'})  2025-01-16 14:58:02.540490 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e6463fb-b573-5867-8a5d-b884b3259bdd', 'data_vg': 'ceph-9e6463fb-b573-5867-8a5d-b884b3259bdd'})  2025-01-16 14:58:02.540668 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:58:02.540693 | orchestrator | 2025-01-16 14:58:02.540708 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-01-16 14:58:02.541017 | orchestrator | Thursday 16 January 2025 14:58:02 +0000 (0:00:00.116) 0:00:24.359 ****** 2025-01-16 14:58:02.776839 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:58:02.777154 | orchestrator | 2025-01-16 14:58:02.777191 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-01-16 14:58:02.861785 | orchestrator | Thursday 16 January 2025 14:58:02 +0000 (0:00:00.237) 0:00:24.596 ****** 2025-01-16 14:58:02.861986 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:58:02.862118 | orchestrator | 2025-01-16 14:58:02.862183 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-01-16 14:58:02.862241 | orchestrator | Thursday 16 January 2025 14:58:02 +0000 (0:00:00.085) 0:00:24.681 ****** 2025-01-16 14:58:02.946500 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:58:02.946634 | orchestrator | 2025-01-16 14:58:02.946809 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-01-16 14:58:02.946822 | orchestrator | Thursday 16 January 2025 14:58:02 +0000 (0:00:00.085) 0:00:24.766 ****** 2025-01-16 14:58:03.036469 | orchestrator | ok: [testbed-node-4] => { 2025-01-16 14:58:03.036597 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-01-16 14:58:03.036612 | orchestrator | } 2025-01-16 14:58:03.036975 | orchestrator | 2025-01-16 14:58:03.037022 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-01-16 14:58:03.037189 | orchestrator | Thursday 16 January 2025 14:58:03 +0000 (0:00:00.089) 0:00:24.856 ****** 2025-01-16 14:58:03.124245 | orchestrator | ok: [testbed-node-4] => { 2025-01-16 14:58:03.124372 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-01-16 14:58:03.124384 | orchestrator | } 2025-01-16 14:58:03.124395 | orchestrator | 2025-01-16 14:58:03.124891 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-01-16 14:58:03.209650 | orchestrator | Thursday 16 January 2025 14:58:03 +0000 (0:00:00.087) 0:00:24.944 ****** 2025-01-16 14:58:03.209790 | orchestrator | ok: [testbed-node-4] => { 2025-01-16 14:58:03.209943 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-01-16 
14:58:03.209967 | orchestrator | } 2025-01-16 14:58:03.209982 | orchestrator | 2025-01-16 14:58:03.210004 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-01-16 14:58:03.480345 | orchestrator | Thursday 16 January 2025 14:58:03 +0000 (0:00:00.085) 0:00:25.029 ****** 2025-01-16 14:58:03.480518 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:58:03.480593 | orchestrator | 2025-01-16 14:58:03.480611 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-01-16 14:58:03.480661 | orchestrator | Thursday 16 January 2025 14:58:03 +0000 (0:00:00.270) 0:00:25.299 ****** 2025-01-16 14:58:03.750633 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:58:03.750837 | orchestrator | 2025-01-16 14:58:03.750909 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-01-16 14:58:03.750942 | orchestrator | Thursday 16 January 2025 14:58:03 +0000 (0:00:00.270) 0:00:25.570 ****** 2025-01-16 14:58:04.034777 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:58:04.035035 | orchestrator | 2025-01-16 14:58:04.035076 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-01-16 14:58:04.124208 | orchestrator | Thursday 16 January 2025 14:58:04 +0000 (0:00:00.283) 0:00:25.854 ****** 2025-01-16 14:58:04.124328 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:58:04.190291 | orchestrator | 2025-01-16 14:58:04.190418 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-01-16 14:58:04.190438 | orchestrator | Thursday 16 January 2025 14:58:04 +0000 (0:00:00.089) 0:00:25.943 ****** 2025-01-16 14:58:04.190471 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:58:04.369363 | orchestrator | 2025-01-16 14:58:04.369521 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-01-16 14:58:04.369558 | orchestrator | Thursday 16 January 2025 14:58:04 +0000 (0:00:00.066) 0:00:26.010 ****** 2025-01-16 14:58:04.369605 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:58:04.369713 | orchestrator | 2025-01-16 14:58:04.369743 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-01-16 14:58:04.369776 | orchestrator | Thursday 16 January 2025 14:58:04 +0000 (0:00:00.179) 0:00:26.189 ****** 2025-01-16 14:58:04.460848 | orchestrator | ok: [testbed-node-4] => { 2025-01-16 14:58:04.461095 | orchestrator |  "vgs_report": { 2025-01-16 14:58:04.461141 | orchestrator |  "vg": [] 2025-01-16 14:58:04.461152 | orchestrator |  } 2025-01-16 14:58:04.461169 | orchestrator | } 2025-01-16 14:58:04.461234 | orchestrator | 2025-01-16 14:58:04.461431 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-01-16 14:58:04.461555 | orchestrator | Thursday 16 January 2025 14:58:04 +0000 (0:00:00.090) 0:00:26.279 ****** 2025-01-16 14:58:04.545917 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:58:04.626573 | orchestrator | 2025-01-16 14:58:04.626701 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-01-16 14:58:04.626785 | orchestrator | Thursday 16 January 2025 14:58:04 +0000 (0:00:00.084) 0:00:26.364 ****** 2025-01-16 14:58:04.626829 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:58:04.626995 | orchestrator | 2025-01-16 14:58:04.627021 | orchestrator | TASK [Print size needed for LVs on 
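The "Gather DB/WAL VGs with total and available size in bytes" tasks and the empty vgs_report printed above point to a vgs call restricted to dedicated DB/WAL volume groups, of which this testbed has none. A sketch assuming lvm2's vgs JSON report; --units b, --nosuffix and the field selection are standard lvm2 options, while the _db_wal_vg_names filter list is an assumption standing in for whatever filtering the real task applies:

# sketch only
- name: Gather DB VGs with total and available size in bytes
  ansible.builtin.command:
    cmd: vgs --reportformat json --units b --nosuffix -o vg_name,vg_size,vg_free
  register: _db_vgs_cmd_output
  changed_when: false

- name: Combine JSON from _db/wal/db_wal_vgs_cmd_output
  ansible.builtin.set_fact:
    vgs_report:
      vg: "{{ (_db_vgs_cmd_output.stdout | from_json).report[0].vg
              | selectattr('vg_name', 'in', _db_wal_vg_names | default([]))
              | list }}"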
ceph_db_devices] **************************** 2025-01-16 14:58:04.627046 | orchestrator | Thursday 16 January 2025 14:58:04 +0000 (0:00:00.081) 0:00:26.446 ****** 2025-01-16 14:58:04.709843 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:58:04.794189 | orchestrator | 2025-01-16 14:58:04.794317 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-01-16 14:58:04.794347 | orchestrator | Thursday 16 January 2025 14:58:04 +0000 (0:00:00.082) 0:00:26.528 ****** 2025-01-16 14:58:04.794387 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:58:04.880140 | orchestrator | 2025-01-16 14:58:04.880238 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-01-16 14:58:04.880250 | orchestrator | Thursday 16 January 2025 14:58:04 +0000 (0:00:00.084) 0:00:26.613 ****** 2025-01-16 14:58:04.880273 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:58:04.962575 | orchestrator | 2025-01-16 14:58:04.962660 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-01-16 14:58:04.962669 | orchestrator | Thursday 16 January 2025 14:58:04 +0000 (0:00:00.086) 0:00:26.700 ****** 2025-01-16 14:58:04.962687 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:58:05.049489 | orchestrator | 2025-01-16 14:58:05.049611 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-01-16 14:58:05.049632 | orchestrator | Thursday 16 January 2025 14:58:04 +0000 (0:00:00.082) 0:00:26.782 ****** 2025-01-16 14:58:05.049681 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:58:05.049753 | orchestrator | 2025-01-16 14:58:05.049775 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-01-16 14:58:05.131670 | orchestrator | Thursday 16 January 2025 14:58:05 +0000 (0:00:00.085) 0:00:26.868 ****** 2025-01-16 14:58:05.131780 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:58:05.212448 | orchestrator | 2025-01-16 14:58:05.212535 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-01-16 14:58:05.212544 | orchestrator | Thursday 16 January 2025 14:58:05 +0000 (0:00:00.083) 0:00:26.951 ****** 2025-01-16 14:58:05.212561 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:58:05.295980 | orchestrator | 2025-01-16 14:58:05.296070 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-01-16 14:58:05.296081 | orchestrator | Thursday 16 January 2025 14:58:05 +0000 (0:00:00.080) 0:00:27.032 ****** 2025-01-16 14:58:05.296103 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:58:05.296158 | orchestrator | 2025-01-16 14:58:05.296169 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-01-16 14:58:05.296179 | orchestrator | Thursday 16 January 2025 14:58:05 +0000 (0:00:00.083) 0:00:27.115 ****** 2025-01-16 14:58:05.479369 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:58:05.479596 | orchestrator | 2025-01-16 14:58:05.562876 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-01-16 14:58:05.562992 | orchestrator | Thursday 16 January 2025 14:58:05 +0000 (0:00:00.182) 0:00:27.298 ****** 2025-01-16 14:58:05.563018 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:58:05.563074 | orchestrator | 2025-01-16 14:58:05.563089 | orchestrator | TASK [Fail if DB LV 
size < 30 GiB for ceph_db_devices] ************************* 2025-01-16 14:58:05.563158 | orchestrator | Thursday 16 January 2025 14:58:05 +0000 (0:00:00.084) 0:00:27.382 ****** 2025-01-16 14:58:05.645261 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:58:05.645375 | orchestrator | 2025-01-16 14:58:05.645392 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-01-16 14:58:05.645535 | orchestrator | Thursday 16 January 2025 14:58:05 +0000 (0:00:00.082) 0:00:27.465 ****** 2025-01-16 14:58:05.728940 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:58:05.830183 | orchestrator | 2025-01-16 14:58:05.830324 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-01-16 14:58:05.830345 | orchestrator | Thursday 16 January 2025 14:58:05 +0000 (0:00:00.083) 0:00:27.548 ****** 2025-01-16 14:58:05.830380 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02', 'data_vg': 'ceph-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02'})  2025-01-16 14:58:05.933523 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e6463fb-b573-5867-8a5d-b884b3259bdd', 'data_vg': 'ceph-9e6463fb-b573-5867-8a5d-b884b3259bdd'})  2025-01-16 14:58:05.933691 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:58:05.933721 | orchestrator | 2025-01-16 14:58:05.933745 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-01-16 14:58:05.933768 | orchestrator | Thursday 16 January 2025 14:58:05 +0000 (0:00:00.101) 0:00:27.649 ****** 2025-01-16 14:58:05.933810 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02', 'data_vg': 'ceph-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02'})  2025-01-16 14:58:05.934180 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e6463fb-b573-5867-8a5d-b884b3259bdd', 'data_vg': 'ceph-9e6463fb-b573-5867-8a5d-b884b3259bdd'})  2025-01-16 14:58:05.934216 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:58:05.934248 | orchestrator | 2025-01-16 14:58:06.042663 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-01-16 14:58:06.042785 | orchestrator | Thursday 16 January 2025 14:58:05 +0000 (0:00:00.102) 0:00:27.752 ****** 2025-01-16 14:58:06.042838 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02', 'data_vg': 'ceph-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02'})  2025-01-16 14:58:06.043049 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e6463fb-b573-5867-8a5d-b884b3259bdd', 'data_vg': 'ceph-9e6463fb-b573-5867-8a5d-b884b3259bdd'})  2025-01-16 14:58:06.043291 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:58:06.043493 | orchestrator | 2025-01-16 14:58:06.043977 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-01-16 14:58:06.145839 | orchestrator | Thursday 16 January 2025 14:58:06 +0000 (0:00:00.110) 0:00:27.862 ****** 2025-01-16 14:58:06.146135 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02', 'data_vg': 'ceph-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02'})  2025-01-16 14:58:06.146265 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e6463fb-b573-5867-8a5d-b884b3259bdd', 'data_vg': 'ceph-9e6463fb-b573-5867-8a5d-b884b3259bdd'})  2025-01-16 
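The "Fail if DB LV size < 30 GiB ..." tasks are guard rails that only fire when dedicated DB or DB+WAL devices are configured, which is why they are skipped in this run. A hedged sketch of such a check, assuming a computed per-LV size in bytes named _db_lv_size_bytes (not a variable shown in the log):

# sketch only
- name: Fail if DB LV size < 30 GiB for ceph_db_devices
  ansible.builtin.assert:
    that:
      - _db_lv_size_bytes | int >= 30 * 1024 * 1024 * 1024
    fail_msg: "DB LVs must be at least 30 GiB, got {{ _db_lv_size_bytes }} bytes"
  when: ceph_db_devices | default({}) | length > 0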
14:58:06.146316 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:58:06.146498 | orchestrator | 2025-01-16 14:58:06.146832 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-01-16 14:58:06.147104 | orchestrator | Thursday 16 January 2025 14:58:06 +0000 (0:00:00.102) 0:00:27.965 ****** 2025-01-16 14:58:06.250375 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02', 'data_vg': 'ceph-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02'})  2025-01-16 14:58:06.250550 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e6463fb-b573-5867-8a5d-b884b3259bdd', 'data_vg': 'ceph-9e6463fb-b573-5867-8a5d-b884b3259bdd'})  2025-01-16 14:58:06.250577 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:58:06.250682 | orchestrator | 2025-01-16 14:58:06.251247 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-01-16 14:58:06.347248 | orchestrator | Thursday 16 January 2025 14:58:06 +0000 (0:00:00.103) 0:00:28.069 ****** 2025-01-16 14:58:06.347396 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02', 'data_vg': 'ceph-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02'})  2025-01-16 14:58:06.347550 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e6463fb-b573-5867-8a5d-b884b3259bdd', 'data_vg': 'ceph-9e6463fb-b573-5867-8a5d-b884b3259bdd'})  2025-01-16 14:58:06.347578 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:58:06.347605 | orchestrator | 2025-01-16 14:58:06.348079 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-01-16 14:58:06.348114 | orchestrator | Thursday 16 January 2025 14:58:06 +0000 (0:00:00.097) 0:00:28.166 ****** 2025-01-16 14:58:06.450397 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02', 'data_vg': 'ceph-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02'})  2025-01-16 14:58:06.450568 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e6463fb-b573-5867-8a5d-b884b3259bdd', 'data_vg': 'ceph-9e6463fb-b573-5867-8a5d-b884b3259bdd'})  2025-01-16 14:58:06.450586 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:58:06.450602 | orchestrator | 2025-01-16 14:58:06.450619 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-01-16 14:58:06.549340 | orchestrator | Thursday 16 January 2025 14:58:06 +0000 (0:00:00.102) 0:00:28.269 ****** 2025-01-16 14:58:06.549463 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02', 'data_vg': 'ceph-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02'})  2025-01-16 14:58:06.549541 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e6463fb-b573-5867-8a5d-b884b3259bdd', 'data_vg': 'ceph-9e6463fb-b573-5867-8a5d-b884b3259bdd'})  2025-01-16 14:58:06.549551 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:58:06.549564 | orchestrator | 2025-01-16 14:58:06.549727 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-01-16 14:58:06.549743 | orchestrator | Thursday 16 January 2025 14:58:06 +0000 (0:00:00.099) 0:00:28.368 ****** 2025-01-16 14:58:06.928726 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:58:07.228012 | orchestrator | 2025-01-16 14:58:07.228127 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] 
******************************** 2025-01-16 14:58:07.228143 | orchestrator | Thursday 16 January 2025 14:58:06 +0000 (0:00:00.379) 0:00:28.747 ****** 2025-01-16 14:58:07.228170 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:58:07.314078 | orchestrator | 2025-01-16 14:58:07.314178 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-01-16 14:58:07.314198 | orchestrator | Thursday 16 January 2025 14:58:07 +0000 (0:00:00.299) 0:00:29.047 ****** 2025-01-16 14:58:07.314227 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:58:07.314244 | orchestrator | 2025-01-16 14:58:07.314259 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-01-16 14:58:07.314277 | orchestrator | Thursday 16 January 2025 14:58:07 +0000 (0:00:00.086) 0:00:29.134 ****** 2025-01-16 14:58:07.425303 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-9e6463fb-b573-5867-8a5d-b884b3259bdd', 'vg_name': 'ceph-9e6463fb-b573-5867-8a5d-b884b3259bdd'}) 2025-01-16 14:58:07.425431 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02', 'vg_name': 'ceph-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02'}) 2025-01-16 14:58:07.425452 | orchestrator | 2025-01-16 14:58:07.425467 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-01-16 14:58:07.425488 | orchestrator | Thursday 16 January 2025 14:58:07 +0000 (0:00:00.111) 0:00:29.245 ****** 2025-01-16 14:58:07.529083 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02', 'data_vg': 'ceph-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02'})  2025-01-16 14:58:07.529266 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e6463fb-b573-5867-8a5d-b884b3259bdd', 'data_vg': 'ceph-9e6463fb-b573-5867-8a5d-b884b3259bdd'})  2025-01-16 14:58:07.529290 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:58:07.529306 | orchestrator | 2025-01-16 14:58:07.529327 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-01-16 14:58:07.627825 | orchestrator | Thursday 16 January 2025 14:58:07 +0000 (0:00:00.103) 0:00:29.348 ****** 2025-01-16 14:58:07.628047 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02', 'data_vg': 'ceph-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02'})  2025-01-16 14:58:07.628163 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e6463fb-b573-5867-8a5d-b884b3259bdd', 'data_vg': 'ceph-9e6463fb-b573-5867-8a5d-b884b3259bdd'})  2025-01-16 14:58:07.628180 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:58:07.628195 | orchestrator | 2025-01-16 14:58:07.628250 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-01-16 14:58:07.628268 | orchestrator | Thursday 16 January 2025 14:58:07 +0000 (0:00:00.099) 0:00:29.447 ****** 2025-01-16 14:58:07.727802 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02', 'data_vg': 'ceph-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02'})  2025-01-16 14:58:07.727999 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e6463fb-b573-5867-8a5d-b884b3259bdd', 'data_vg': 'ceph-9e6463fb-b573-5867-8a5d-b884b3259bdd'})  2025-01-16 14:58:07.728016 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:58:07.728027 | orchestrator | 2025-01-16 
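"Create list of VG/LV names" and the following "Fail if ... LV defined in lvm_volumes is missing" tasks cross-check the desired lvm_volumes entries against what was just discovered on the node. One way to express that check, assuming the discovered pairs are flattened into vg/lv strings in a fact named _vg_lv_names (an assumption for illustration):

# sketch only
- name: Create list of VG/LV names
  ansible.builtin.set_fact:
    _vg_lv_names: "{{ (_vg_lv_names | default([])) + [item.vg_name ~ '/' ~ item.lv_name] }}"
  loop: "{{ lvm_report.lv }}"

- name: Fail if block LV defined in lvm_volumes is missing
  ansible.builtin.fail:
    msg: "Block LV {{ item.data_vg }}/{{ item.data }} is missing on this host"
  loop: "{{ lvm_volumes }}"
  when: (item.data_vg ~ '/' ~ item.data) not in _vg_lv_names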
14:58:07.728042 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-01-16 14:58:07.728205 | orchestrator | Thursday 16 January 2025 14:58:07 +0000 (0:00:00.100) 0:00:29.547 ****** 2025-01-16 14:58:08.273037 | orchestrator | ok: [testbed-node-4] => { 2025-01-16 14:58:08.273210 | orchestrator |  "lvm_report": { 2025-01-16 14:58:08.273234 | orchestrator |  "lv": [ 2025-01-16 14:58:08.273251 | orchestrator |  { 2025-01-16 14:58:08.273614 | orchestrator |  "lv_name": "osd-block-9e6463fb-b573-5867-8a5d-b884b3259bdd", 2025-01-16 14:58:08.273740 | orchestrator |  "vg_name": "ceph-9e6463fb-b573-5867-8a5d-b884b3259bdd" 2025-01-16 14:58:08.274361 | orchestrator |  }, 2025-01-16 14:58:08.274483 | orchestrator |  { 2025-01-16 14:58:08.274508 | orchestrator |  "lv_name": "osd-block-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02", 2025-01-16 14:58:08.274791 | orchestrator |  "vg_name": "ceph-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02" 2025-01-16 14:58:08.275028 | orchestrator |  } 2025-01-16 14:58:08.275296 | orchestrator |  ], 2025-01-16 14:58:08.275528 | orchestrator |  "pv": [ 2025-01-16 14:58:08.275735 | orchestrator |  { 2025-01-16 14:58:08.276030 | orchestrator |  "pv_name": "/dev/sdb", 2025-01-16 14:58:08.276199 | orchestrator |  "vg_name": "ceph-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02" 2025-01-16 14:58:08.276469 | orchestrator |  }, 2025-01-16 14:58:08.276586 | orchestrator |  { 2025-01-16 14:58:08.276777 | orchestrator |  "pv_name": "/dev/sdc", 2025-01-16 14:58:08.277105 | orchestrator |  "vg_name": "ceph-9e6463fb-b573-5867-8a5d-b884b3259bdd" 2025-01-16 14:58:08.277354 | orchestrator |  } 2025-01-16 14:58:08.277533 | orchestrator |  ] 2025-01-16 14:58:08.277827 | orchestrator |  } 2025-01-16 14:58:08.277976 | orchestrator | } 2025-01-16 14:58:08.278179 | orchestrator | 2025-01-16 14:58:08.278386 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-01-16 14:58:08.278691 | orchestrator | 2025-01-16 14:58:08.278766 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-01-16 14:58:08.278974 | orchestrator | Thursday 16 January 2025 14:58:08 +0000 (0:00:00.545) 0:00:30.093 ****** 2025-01-16 14:58:08.436320 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-01-16 14:58:08.436555 | orchestrator | 2025-01-16 14:58:08.436588 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-01-16 14:58:08.587694 | orchestrator | Thursday 16 January 2025 14:58:08 +0000 (0:00:00.162) 0:00:30.255 ****** 2025-01-16 14:58:08.587915 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:58:08.588054 | orchestrator | 2025-01-16 14:58:08.588089 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:58:08.588123 | orchestrator | Thursday 16 January 2025 14:58:08 +0000 (0:00:00.148) 0:00:30.404 ****** 2025-01-16 14:58:08.883050 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-01-16 14:58:08.883257 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-01-16 14:58:08.883325 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-01-16 14:58:08.883756 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-01-16 14:58:08.884058 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-01-16 14:58:08.884368 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-01-16 14:58:08.886405 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-01-16 14:58:08.886589 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-01-16 14:58:08.886612 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-01-16 14:58:08.886633 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-01-16 14:58:08.886829 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-01-16 14:58:08.886920 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-01-16 14:58:08.886981 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-01-16 14:58:08.887001 | orchestrator | 2025-01-16 14:58:08.887093 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:58:08.887219 | orchestrator | Thursday 16 January 2025 14:58:08 +0000 (0:00:00.298) 0:00:30.702 ****** 2025-01-16 14:58:09.005750 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:09.128629 | orchestrator | 2025-01-16 14:58:09.128765 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:58:09.128794 | orchestrator | Thursday 16 January 2025 14:58:08 +0000 (0:00:00.121) 0:00:30.824 ****** 2025-01-16 14:58:09.128835 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:09.248778 | orchestrator | 2025-01-16 14:58:09.248967 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:58:09.248990 | orchestrator | Thursday 16 January 2025 14:58:09 +0000 (0:00:00.123) 0:00:30.948 ****** 2025-01-16 14:58:09.249026 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:09.550729 | orchestrator | 2025-01-16 14:58:09.550815 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:58:09.550823 | orchestrator | Thursday 16 January 2025 14:58:09 +0000 (0:00:00.119) 0:00:31.068 ****** 2025-01-16 14:58:09.550840 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:09.671652 | orchestrator | 2025-01-16 14:58:09.671740 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:58:09.671750 | orchestrator | Thursday 16 January 2025 14:58:09 +0000 (0:00:00.301) 0:00:31.370 ****** 2025-01-16 14:58:09.671769 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:09.671804 | orchestrator | 2025-01-16 14:58:09.671812 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:58:09.671821 | orchestrator | Thursday 16 January 2025 14:58:09 +0000 (0:00:00.121) 0:00:31.491 ****** 2025-01-16 14:58:09.793188 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:09.793667 | orchestrator | 2025-01-16 14:58:09.917358 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:58:09.917484 | orchestrator | Thursday 16 January 2025 14:58:09 +0000 (0:00:00.121) 0:00:31.613 ****** 2025-01-16 14:58:09.917521 | orchestrator | skipping: 
[testbed-node-5] 2025-01-16 14:58:10.042414 | orchestrator | 2025-01-16 14:58:10.042530 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:58:10.042545 | orchestrator | Thursday 16 January 2025 14:58:09 +0000 (0:00:00.123) 0:00:31.737 ****** 2025-01-16 14:58:10.042568 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:10.042643 | orchestrator | 2025-01-16 14:58:10.042655 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:58:10.042667 | orchestrator | Thursday 16 January 2025 14:58:10 +0000 (0:00:00.124) 0:00:31.862 ****** 2025-01-16 14:58:10.322321 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5f86695f-209f-4777-b6ae-0a791cf41e6e) 2025-01-16 14:58:10.322562 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5f86695f-209f-4777-b6ae-0a791cf41e6e) 2025-01-16 14:58:10.322595 | orchestrator | 2025-01-16 14:58:10.322668 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:58:10.322749 | orchestrator | Thursday 16 January 2025 14:58:10 +0000 (0:00:00.279) 0:00:32.141 ****** 2025-01-16 14:58:10.616669 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0aac5059-2a3a-4141-840f-fb09a7465e72) 2025-01-16 14:58:10.616782 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0aac5059-2a3a-4141-840f-fb09a7465e72) 2025-01-16 14:58:10.616797 | orchestrator | 2025-01-16 14:58:10.616976 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:58:10.617195 | orchestrator | Thursday 16 January 2025 14:58:10 +0000 (0:00:00.293) 0:00:32.435 ****** 2025-01-16 14:58:10.909911 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_97685de2-31d7-40a6-8026-91294c9f6af1) 2025-01-16 14:58:11.197970 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_97685de2-31d7-40a6-8026-91294c9f6af1) 2025-01-16 14:58:11.198266 | orchestrator | 2025-01-16 14:58:11.198288 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:58:11.198299 | orchestrator | Thursday 16 January 2025 14:58:10 +0000 (0:00:00.292) 0:00:32.728 ****** 2025-01-16 14:58:11.198345 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d740be6b-1b5d-4ad1-85aa-7275c0983c2d) 2025-01-16 14:58:11.411613 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d740be6b-1b5d-4ad1-85aa-7275c0983c2d) 2025-01-16 14:58:11.411718 | orchestrator | 2025-01-16 14:58:11.411735 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-01-16 14:58:11.411749 | orchestrator | Thursday 16 January 2025 14:58:11 +0000 (0:00:00.288) 0:00:33.017 ****** 2025-01-16 14:58:11.411775 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-01-16 14:58:11.411835 | orchestrator | 2025-01-16 14:58:11.411850 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:58:11.411901 | orchestrator | Thursday 16 January 2025 14:58:11 +0000 (0:00:00.213) 0:00:33.231 ****** 2025-01-16 14:58:11.929169 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-01-16 14:58:11.929301 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 
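"Add known partitions to the list of available block devices" runs /ansible/tasks/_add-device-partitions.yml once per device and, for sda, appends sda1, sda14, sda15 and sda16. The include itself is not part of the log; a condensed sketch based on the standard ansible_facts.devices structure from fact gathering (available_devices is again an assumed fact name):

# sketch only; the real role iterates via per-device includes rather than one loop
- name: Add known partitions to the list of available block devices
  ansible.builtin.set_fact:
    available_devices: "{{ available_devices + (ansible_facts.devices[item].partitions | default({}) | list) }}"
  loop: "{{ ansible_facts.devices | list }}"
  when: ansible_facts.devices[item].partitions | default({}) | length > 0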
2025-01-16 14:58:11.929323 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-01-16 14:58:11.929621 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-01-16 14:58:11.929691 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-01-16 14:58:11.929925 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-01-16 14:58:11.931956 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-01-16 14:58:12.076937 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-01-16 14:58:12.077039 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-01-16 14:58:12.077055 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-01-16 14:58:12.077070 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-01-16 14:58:12.077086 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-01-16 14:58:12.077101 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-01-16 14:58:12.077115 | orchestrator | 2025-01-16 14:58:12.077131 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:58:12.077146 | orchestrator | Thursday 16 January 2025 14:58:11 +0000 (0:00:00.516) 0:00:33.748 ****** 2025-01-16 14:58:12.077175 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:12.078138 | orchestrator | 2025-01-16 14:58:12.078175 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:58:12.078197 | orchestrator | Thursday 16 January 2025 14:58:12 +0000 (0:00:00.146) 0:00:33.894 ****** 2025-01-16 14:58:12.206964 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:12.208199 | orchestrator | 2025-01-16 14:58:12.208267 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:58:12.208284 | orchestrator | Thursday 16 January 2025 14:58:12 +0000 (0:00:00.132) 0:00:34.026 ****** 2025-01-16 14:58:12.337326 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:12.338405 | orchestrator | 2025-01-16 14:58:12.338556 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:58:12.464514 | orchestrator | Thursday 16 January 2025 14:58:12 +0000 (0:00:00.129) 0:00:34.156 ****** 2025-01-16 14:58:12.464624 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:12.594210 | orchestrator | 2025-01-16 14:58:12.594345 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:58:12.594367 | orchestrator | Thursday 16 January 2025 14:58:12 +0000 (0:00:00.128) 0:00:34.284 ****** 2025-01-16 14:58:12.594402 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:12.726668 | orchestrator | 2025-01-16 14:58:12.726789 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:58:12.726809 | orchestrator | Thursday 16 January 2025 14:58:12 +0000 (0:00:00.128) 0:00:34.413 ****** 2025-01-16 14:58:12.726840 | orchestrator | 
skipping: [testbed-node-5] 2025-01-16 14:58:12.858819 | orchestrator | 2025-01-16 14:58:12.859059 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:58:12.859091 | orchestrator | Thursday 16 January 2025 14:58:12 +0000 (0:00:00.132) 0:00:34.546 ****** 2025-01-16 14:58:12.859133 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:12.979598 | orchestrator | 2025-01-16 14:58:12.979709 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:58:12.979724 | orchestrator | Thursday 16 January 2025 14:58:12 +0000 (0:00:00.131) 0:00:34.677 ****** 2025-01-16 14:58:12.979762 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:13.532832 | orchestrator | 2025-01-16 14:58:13.532951 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:58:13.532965 | orchestrator | Thursday 16 January 2025 14:58:12 +0000 (0:00:00.121) 0:00:34.799 ****** 2025-01-16 14:58:13.532983 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-01-16 14:58:13.534129 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-01-16 14:58:13.534189 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-01-16 14:58:13.534203 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-01-16 14:58:13.883345 | orchestrator | 2025-01-16 14:58:13.883461 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:58:13.883483 | orchestrator | Thursday 16 January 2025 14:58:13 +0000 (0:00:00.552) 0:00:35.352 ****** 2025-01-16 14:58:13.883518 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:14.007346 | orchestrator | 2025-01-16 14:58:14.007506 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:58:14.007527 | orchestrator | Thursday 16 January 2025 14:58:13 +0000 (0:00:00.349) 0:00:35.701 ****** 2025-01-16 14:58:14.007554 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:14.007613 | orchestrator | 2025-01-16 14:58:14.007632 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:58:14.139499 | orchestrator | Thursday 16 January 2025 14:58:14 +0000 (0:00:00.125) 0:00:35.827 ****** 2025-01-16 14:58:14.139610 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:14.267711 | orchestrator | 2025-01-16 14:58:14.267854 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-01-16 14:58:14.267910 | orchestrator | Thursday 16 January 2025 14:58:14 +0000 (0:00:00.131) 0:00:35.959 ****** 2025-01-16 14:58:14.267934 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:14.267993 | orchestrator | 2025-01-16 14:58:14.268079 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-01-16 14:58:14.268288 | orchestrator | Thursday 16 January 2025 14:58:14 +0000 (0:00:00.128) 0:00:36.087 ****** 2025-01-16 14:58:14.352516 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:14.484049 | orchestrator | 2025-01-16 14:58:14.484174 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-01-16 14:58:14.484194 | orchestrator | Thursday 16 January 2025 14:58:14 +0000 (0:00:00.084) 0:00:36.172 ****** 2025-01-16 14:58:14.484255 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 
'53007ac5-07c2-53cd-add6-e57729925218'}}) 2025-01-16 14:58:15.498281 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '54c8019f-0033-5b40-9c4f-7f2e43f78b89'}}) 2025-01-16 14:58:15.498412 | orchestrator | 2025-01-16 14:58:15.498436 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-01-16 14:58:15.498453 | orchestrator | Thursday 16 January 2025 14:58:14 +0000 (0:00:00.131) 0:00:36.303 ****** 2025-01-16 14:58:15.498486 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-53007ac5-07c2-53cd-add6-e57729925218', 'data_vg': 'ceph-53007ac5-07c2-53cd-add6-e57729925218'}) 2025-01-16 14:58:15.498585 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-54c8019f-0033-5b40-9c4f-7f2e43f78b89', 'data_vg': 'ceph-54c8019f-0033-5b40-9c4f-7f2e43f78b89'}) 2025-01-16 14:58:15.498603 | orchestrator | 2025-01-16 14:58:15.498618 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-01-16 14:58:15.498637 | orchestrator | Thursday 16 January 2025 14:58:15 +0000 (0:00:01.013) 0:00:37.316 ****** 2025-01-16 14:58:15.603638 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53007ac5-07c2-53cd-add6-e57729925218', 'data_vg': 'ceph-53007ac5-07c2-53cd-add6-e57729925218'})  2025-01-16 14:58:15.603769 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54c8019f-0033-5b40-9c4f-7f2e43f78b89', 'data_vg': 'ceph-54c8019f-0033-5b40-9c4f-7f2e43f78b89'})  2025-01-16 14:58:15.603781 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:15.605177 | orchestrator | 2025-01-16 14:58:15.605291 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-01-16 14:58:16.272352 | orchestrator | Thursday 16 January 2025 14:58:15 +0000 (0:00:00.103) 0:00:37.420 ****** 2025-01-16 14:58:16.272493 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-53007ac5-07c2-53cd-add6-e57729925218', 'data_vg': 'ceph-53007ac5-07c2-53cd-add6-e57729925218'}) 2025-01-16 14:58:16.369309 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-54c8019f-0033-5b40-9c4f-7f2e43f78b89', 'data_vg': 'ceph-54c8019f-0033-5b40-9c4f-7f2e43f78b89'}) 2025-01-16 14:58:16.369391 | orchestrator | 2025-01-16 14:58:16.369420 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-01-16 14:58:16.369427 | orchestrator | Thursday 16 January 2025 14:58:16 +0000 (0:00:00.670) 0:00:38.091 ****** 2025-01-16 14:58:16.369444 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53007ac5-07c2-53cd-add6-e57729925218', 'data_vg': 'ceph-53007ac5-07c2-53cd-add6-e57729925218'})  2025-01-16 14:58:16.369472 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54c8019f-0033-5b40-9c4f-7f2e43f78b89', 'data_vg': 'ceph-54c8019f-0033-5b40-9c4f-7f2e43f78b89'})  2025-01-16 14:58:16.369479 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:16.369486 | orchestrator | 2025-01-16 14:58:16.369494 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-01-16 14:58:16.369623 | orchestrator | Thursday 16 January 2025 14:58:16 +0000 (0:00:00.097) 0:00:38.188 ****** 2025-01-16 14:58:16.547281 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:16.647025 | orchestrator | 2025-01-16 14:58:16.647147 | orchestrator | TASK [Print 'Create DB VGs'] 
*************************************************** 2025-01-16 14:58:16.647167 | orchestrator | Thursday 16 January 2025 14:58:16 +0000 (0:00:00.178) 0:00:38.366 ****** 2025-01-16 14:58:16.647198 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53007ac5-07c2-53cd-add6-e57729925218', 'data_vg': 'ceph-53007ac5-07c2-53cd-add6-e57729925218'})  2025-01-16 14:58:16.648197 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54c8019f-0033-5b40-9c4f-7f2e43f78b89', 'data_vg': 'ceph-54c8019f-0033-5b40-9c4f-7f2e43f78b89'})  2025-01-16 14:58:16.648258 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:16.648282 | orchestrator | 2025-01-16 14:58:16.648334 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-01-16 14:58:16.648539 | orchestrator | Thursday 16 January 2025 14:58:16 +0000 (0:00:00.098) 0:00:38.465 ****** 2025-01-16 14:58:16.730293 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:16.825451 | orchestrator | 2025-01-16 14:58:16.825604 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-01-16 14:58:16.825627 | orchestrator | Thursday 16 January 2025 14:58:16 +0000 (0:00:00.084) 0:00:38.549 ****** 2025-01-16 14:58:16.825662 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53007ac5-07c2-53cd-add6-e57729925218', 'data_vg': 'ceph-53007ac5-07c2-53cd-add6-e57729925218'})  2025-01-16 14:58:16.825755 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54c8019f-0033-5b40-9c4f-7f2e43f78b89', 'data_vg': 'ceph-54c8019f-0033-5b40-9c4f-7f2e43f78b89'})  2025-01-16 14:58:16.825804 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:16.825826 | orchestrator | 2025-01-16 14:58:16.826121 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-01-16 14:58:16.826246 | orchestrator | Thursday 16 January 2025 14:58:16 +0000 (0:00:00.095) 0:00:38.645 ****** 2025-01-16 14:58:16.908690 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:17.004720 | orchestrator | 2025-01-16 14:58:17.004812 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-01-16 14:58:17.004820 | orchestrator | Thursday 16 January 2025 14:58:16 +0000 (0:00:00.082) 0:00:38.727 ****** 2025-01-16 14:58:17.004839 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53007ac5-07c2-53cd-add6-e57729925218', 'data_vg': 'ceph-53007ac5-07c2-53cd-add6-e57729925218'})  2025-01-16 14:58:17.005667 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54c8019f-0033-5b40-9c4f-7f2e43f78b89', 'data_vg': 'ceph-54c8019f-0033-5b40-9c4f-7f2e43f78b89'})  2025-01-16 14:58:17.006409 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:17.006461 | orchestrator | 2025-01-16 14:58:17.006527 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-01-16 14:58:17.006679 | orchestrator | Thursday 16 January 2025 14:58:16 +0000 (0:00:00.096) 0:00:38.824 ****** 2025-01-16 14:58:17.088783 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:58:17.187010 | orchestrator | 2025-01-16 14:58:17.187109 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-01-16 14:58:17.187125 | orchestrator | Thursday 16 January 2025 14:58:17 +0000 (0:00:00.083) 0:00:38.908 ****** 2025-01-16 14:58:17.187150 | orchestrator | skipping: 
[testbed-node-5] => (item={'data': 'osd-block-53007ac5-07c2-53cd-add6-e57729925218', 'data_vg': 'ceph-53007ac5-07c2-53cd-add6-e57729925218'})  2025-01-16 14:58:17.187227 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54c8019f-0033-5b40-9c4f-7f2e43f78b89', 'data_vg': 'ceph-54c8019f-0033-5b40-9c4f-7f2e43f78b89'})  2025-01-16 14:58:17.187241 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:17.187254 | orchestrator | 2025-01-16 14:58:17.187513 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-01-16 14:58:17.187599 | orchestrator | Thursday 16 January 2025 14:58:17 +0000 (0:00:00.098) 0:00:39.006 ****** 2025-01-16 14:58:17.286768 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53007ac5-07c2-53cd-add6-e57729925218', 'data_vg': 'ceph-53007ac5-07c2-53cd-add6-e57729925218'})  2025-01-16 14:58:17.287028 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54c8019f-0033-5b40-9c4f-7f2e43f78b89', 'data_vg': 'ceph-54c8019f-0033-5b40-9c4f-7f2e43f78b89'})  2025-01-16 14:58:17.287065 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:17.287441 | orchestrator | 2025-01-16 14:58:17.287680 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-01-16 14:58:17.288056 | orchestrator | Thursday 16 January 2025 14:58:17 +0000 (0:00:00.099) 0:00:39.106 ****** 2025-01-16 14:58:17.386706 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53007ac5-07c2-53cd-add6-e57729925218', 'data_vg': 'ceph-53007ac5-07c2-53cd-add6-e57729925218'})  2025-01-16 14:58:17.387028 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54c8019f-0033-5b40-9c4f-7f2e43f78b89', 'data_vg': 'ceph-54c8019f-0033-5b40-9c4f-7f2e43f78b89'})  2025-01-16 14:58:17.387086 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:17.387113 | orchestrator | 2025-01-16 14:58:17.387191 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-01-16 14:58:17.387216 | orchestrator | Thursday 16 January 2025 14:58:17 +0000 (0:00:00.099) 0:00:39.206 ****** 2025-01-16 14:58:17.469600 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:17.469721 | orchestrator | 2025-01-16 14:58:17.469753 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-01-16 14:58:17.469798 | orchestrator | Thursday 16 January 2025 14:58:17 +0000 (0:00:00.083) 0:00:39.289 ****** 2025-01-16 14:58:17.553604 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:17.553935 | orchestrator | 2025-01-16 14:58:17.553976 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-01-16 14:58:17.741220 | orchestrator | Thursday 16 January 2025 14:58:17 +0000 (0:00:00.084) 0:00:39.373 ****** 2025-01-16 14:58:17.741320 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:17.741381 | orchestrator | 2025-01-16 14:58:17.741398 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-01-16 14:58:17.828259 | orchestrator | Thursday 16 January 2025 14:58:17 +0000 (0:00:00.186) 0:00:39.560 ****** 2025-01-16 14:58:17.828434 | orchestrator | ok: [testbed-node-5] => { 2025-01-16 14:58:17.828503 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-01-16 14:58:17.828521 | orchestrator | } 2025-01-16 14:58:17.828535 | orchestrator | 2025-01-16 14:58:17.828550 | orchestrator | 
TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-01-16 14:58:17.828568 | orchestrator | Thursday 16 January 2025 14:58:17 +0000 (0:00:00.087) 0:00:39.648 ****** 2025-01-16 14:58:17.915723 | orchestrator | ok: [testbed-node-5] => { 2025-01-16 14:58:17.915916 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-01-16 14:58:17.916175 | orchestrator | } 2025-01-16 14:58:17.916419 | orchestrator | 2025-01-16 14:58:17.916580 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-01-16 14:58:17.916680 | orchestrator | Thursday 16 January 2025 14:58:17 +0000 (0:00:00.087) 0:00:39.735 ****** 2025-01-16 14:58:18.006749 | orchestrator | ok: [testbed-node-5] => { 2025-01-16 14:58:18.273928 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-01-16 14:58:18.274094 | orchestrator | } 2025-01-16 14:58:18.274117 | orchestrator | 2025-01-16 14:58:18.274133 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-01-16 14:58:18.274149 | orchestrator | Thursday 16 January 2025 14:58:18 +0000 (0:00:00.090) 0:00:39.825 ****** 2025-01-16 14:58:18.274181 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:58:18.274253 | orchestrator | 2025-01-16 14:58:18.274297 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-01-16 14:58:18.274316 | orchestrator | Thursday 16 January 2025 14:58:18 +0000 (0:00:00.267) 0:00:40.093 ****** 2025-01-16 14:58:18.544932 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:58:18.812550 | orchestrator | 2025-01-16 14:58:18.812645 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-01-16 14:58:18.812656 | orchestrator | Thursday 16 January 2025 14:58:18 +0000 (0:00:00.270) 0:00:40.364 ****** 2025-01-16 14:58:18.812676 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:58:18.899842 | orchestrator | 2025-01-16 14:58:18.900050 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-01-16 14:58:18.900076 | orchestrator | Thursday 16 January 2025 14:58:18 +0000 (0:00:00.266) 0:00:40.631 ****** 2025-01-16 14:58:18.900113 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:58:18.900568 | orchestrator | 2025-01-16 14:58:18.963333 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-01-16 14:58:18.963471 | orchestrator | Thursday 16 January 2025 14:58:18 +0000 (0:00:00.088) 0:00:40.719 ****** 2025-01-16 14:58:18.963498 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:18.963540 | orchestrator | 2025-01-16 14:58:18.963553 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-01-16 14:58:18.963768 | orchestrator | Thursday 16 January 2025 14:58:18 +0000 (0:00:00.063) 0:00:40.783 ****** 2025-01-16 14:58:19.030259 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:19.030442 | orchestrator | 2025-01-16 14:58:19.030464 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-01-16 14:58:19.030484 | orchestrator | Thursday 16 January 2025 14:58:19 +0000 (0:00:00.066) 0:00:40.849 ****** 2025-01-16 14:58:19.115348 | orchestrator | ok: [testbed-node-5] => { 2025-01-16 14:58:19.115637 | orchestrator |  "vgs_report": { 2025-01-16 14:58:19.115671 | orchestrator |  "vg": [] 2025-01-16 14:58:19.115714 | orchestrator |  } 2025-01-16 14:58:19.116008 | orchestrator 
| } 2025-01-16 14:58:19.116051 | orchestrator | 2025-01-16 14:58:19.116314 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-01-16 14:58:19.116609 | orchestrator | Thursday 16 January 2025 14:58:19 +0000 (0:00:00.085) 0:00:40.935 ****** 2025-01-16 14:58:19.191182 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:19.191319 | orchestrator | 2025-01-16 14:58:19.191336 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-01-16 14:58:19.369684 | orchestrator | Thursday 16 January 2025 14:58:19 +0000 (0:00:00.076) 0:00:41.011 ****** 2025-01-16 14:58:19.370302 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:19.450925 | orchestrator | 2025-01-16 14:58:19.451044 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-01-16 14:58:19.451061 | orchestrator | Thursday 16 January 2025 14:58:19 +0000 (0:00:00.177) 0:00:41.189 ****** 2025-01-16 14:58:19.451098 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:19.451180 | orchestrator | 2025-01-16 14:58:19.451198 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-01-16 14:58:19.537646 | orchestrator | Thursday 16 January 2025 14:58:19 +0000 (0:00:00.081) 0:00:41.270 ****** 2025-01-16 14:58:19.537763 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:19.538263 | orchestrator | 2025-01-16 14:58:19.538456 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-01-16 14:58:19.538483 | orchestrator | Thursday 16 January 2025 14:58:19 +0000 (0:00:00.086) 0:00:41.357 ****** 2025-01-16 14:58:19.627761 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:19.627983 | orchestrator | 2025-01-16 14:58:19.628022 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-01-16 14:58:19.628080 | orchestrator | Thursday 16 January 2025 14:58:19 +0000 (0:00:00.089) 0:00:41.447 ****** 2025-01-16 14:58:19.711021 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:19.713164 | orchestrator | 2025-01-16 14:58:19.713235 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-01-16 14:58:19.713265 | orchestrator | Thursday 16 January 2025 14:58:19 +0000 (0:00:00.083) 0:00:41.530 ****** 2025-01-16 14:58:19.794775 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:19.794995 | orchestrator | 2025-01-16 14:58:19.795021 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-01-16 14:58:19.795044 | orchestrator | Thursday 16 January 2025 14:58:19 +0000 (0:00:00.083) 0:00:41.614 ****** 2025-01-16 14:58:19.876764 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:19.957957 | orchestrator | 2025-01-16 14:58:19.958122 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-01-16 14:58:19.958144 | orchestrator | Thursday 16 January 2025 14:58:19 +0000 (0:00:00.081) 0:00:41.696 ****** 2025-01-16 14:58:19.958173 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:20.042967 | orchestrator | 2025-01-16 14:58:20.043064 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-01-16 14:58:20.043072 | orchestrator | Thursday 16 January 2025 14:58:19 +0000 (0:00:00.081) 0:00:41.777 ****** 2025-01-16 14:58:20.043090 | orchestrator | 
skipping: [testbed-node-5] 2025-01-16 14:58:20.043129 | orchestrator | 2025-01-16 14:58:20.043136 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-01-16 14:58:20.043143 | orchestrator | Thursday 16 January 2025 14:58:20 +0000 (0:00:00.085) 0:00:41.862 ****** 2025-01-16 14:58:20.126420 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:20.207410 | orchestrator | 2025-01-16 14:58:20.207563 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-01-16 14:58:20.207595 | orchestrator | Thursday 16 January 2025 14:58:20 +0000 (0:00:00.083) 0:00:41.946 ****** 2025-01-16 14:58:20.207640 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:20.292031 | orchestrator | 2025-01-16 14:58:20.292137 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-01-16 14:58:20.292148 | orchestrator | Thursday 16 January 2025 14:58:20 +0000 (0:00:00.081) 0:00:42.027 ****** 2025-01-16 14:58:20.292191 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:20.292240 | orchestrator | 2025-01-16 14:58:20.292252 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-01-16 14:58:20.292272 | orchestrator | Thursday 16 January 2025 14:58:20 +0000 (0:00:00.084) 0:00:42.111 ****** 2025-01-16 14:58:20.477645 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:20.477792 | orchestrator | 2025-01-16 14:58:20.477808 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-01-16 14:58:20.477821 | orchestrator | Thursday 16 January 2025 14:58:20 +0000 (0:00:00.185) 0:00:42.297 ****** 2025-01-16 14:58:20.581578 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53007ac5-07c2-53cd-add6-e57729925218', 'data_vg': 'ceph-53007ac5-07c2-53cd-add6-e57729925218'})  2025-01-16 14:58:20.581715 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54c8019f-0033-5b40-9c4f-7f2e43f78b89', 'data_vg': 'ceph-54c8019f-0033-5b40-9c4f-7f2e43f78b89'})  2025-01-16 14:58:20.581729 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:20.581741 | orchestrator | 2025-01-16 14:58:20.581852 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-01-16 14:58:20.581935 | orchestrator | Thursday 16 January 2025 14:58:20 +0000 (0:00:00.103) 0:00:42.401 ****** 2025-01-16 14:58:20.686136 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53007ac5-07c2-53cd-add6-e57729925218', 'data_vg': 'ceph-53007ac5-07c2-53cd-add6-e57729925218'})  2025-01-16 14:58:20.686449 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54c8019f-0033-5b40-9c4f-7f2e43f78b89', 'data_vg': 'ceph-54c8019f-0033-5b40-9c4f-7f2e43f78b89'})  2025-01-16 14:58:20.686495 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:20.686733 | orchestrator | 2025-01-16 14:58:20.686777 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-01-16 14:58:20.686803 | orchestrator | Thursday 16 January 2025 14:58:20 +0000 (0:00:00.103) 0:00:42.505 ****** 2025-01-16 14:58:20.785821 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53007ac5-07c2-53cd-add6-e57729925218', 'data_vg': 'ceph-53007ac5-07c2-53cd-add6-e57729925218'})  2025-01-16 14:58:20.786158 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-54c8019f-0033-5b40-9c4f-7f2e43f78b89', 'data_vg': 'ceph-54c8019f-0033-5b40-9c4f-7f2e43f78b89'})  2025-01-16 14:58:20.786218 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:20.786239 | orchestrator | 2025-01-16 14:58:20.786301 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-01-16 14:58:20.786318 | orchestrator | Thursday 16 January 2025 14:58:20 +0000 (0:00:00.100) 0:00:42.605 ****** 2025-01-16 14:58:20.879810 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53007ac5-07c2-53cd-add6-e57729925218', 'data_vg': 'ceph-53007ac5-07c2-53cd-add6-e57729925218'})  2025-01-16 14:58:20.980265 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54c8019f-0033-5b40-9c4f-7f2e43f78b89', 'data_vg': 'ceph-54c8019f-0033-5b40-9c4f-7f2e43f78b89'})  2025-01-16 14:58:20.980374 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:20.980388 | orchestrator | 2025-01-16 14:58:20.980399 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-01-16 14:58:20.980410 | orchestrator | Thursday 16 January 2025 14:58:20 +0000 (0:00:00.093) 0:00:42.698 ****** 2025-01-16 14:58:20.980433 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53007ac5-07c2-53cd-add6-e57729925218', 'data_vg': 'ceph-53007ac5-07c2-53cd-add6-e57729925218'})  2025-01-16 14:58:20.980494 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54c8019f-0033-5b40-9c4f-7f2e43f78b89', 'data_vg': 'ceph-54c8019f-0033-5b40-9c4f-7f2e43f78b89'})  2025-01-16 14:58:20.980506 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:20.980530 | orchestrator | 2025-01-16 14:58:20.980574 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-01-16 14:58:20.980588 | orchestrator | Thursday 16 January 2025 14:58:20 +0000 (0:00:00.101) 0:00:42.800 ****** 2025-01-16 14:58:21.082878 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53007ac5-07c2-53cd-add6-e57729925218', 'data_vg': 'ceph-53007ac5-07c2-53cd-add6-e57729925218'})  2025-01-16 14:58:21.083048 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54c8019f-0033-5b40-9c4f-7f2e43f78b89', 'data_vg': 'ceph-54c8019f-0033-5b40-9c4f-7f2e43f78b89'})  2025-01-16 14:58:21.083074 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:21.083277 | orchestrator | 2025-01-16 14:58:21.083351 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-01-16 14:58:21.083724 | orchestrator | Thursday 16 January 2025 14:58:21 +0000 (0:00:00.102) 0:00:42.902 ****** 2025-01-16 14:58:21.183720 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53007ac5-07c2-53cd-add6-e57729925218', 'data_vg': 'ceph-53007ac5-07c2-53cd-add6-e57729925218'})  2025-01-16 14:58:21.184439 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54c8019f-0033-5b40-9c4f-7f2e43f78b89', 'data_vg': 'ceph-54c8019f-0033-5b40-9c4f-7f2e43f78b89'})  2025-01-16 14:58:21.184550 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:21.184598 | orchestrator | 2025-01-16 14:58:21.184753 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-01-16 14:58:21.184787 | orchestrator | Thursday 16 January 2025 14:58:21 +0000 (0:00:00.101) 0:00:43.003 ****** 2025-01-16 14:58:21.282321 | orchestrator | skipping: [testbed-node-5] => 
(item={'data': 'osd-block-53007ac5-07c2-53cd-add6-e57729925218', 'data_vg': 'ceph-53007ac5-07c2-53cd-add6-e57729925218'})  2025-01-16 14:58:21.282555 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54c8019f-0033-5b40-9c4f-7f2e43f78b89', 'data_vg': 'ceph-54c8019f-0033-5b40-9c4f-7f2e43f78b89'})  2025-01-16 14:58:21.282583 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:21.282607 | orchestrator | 2025-01-16 14:58:21.282838 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-01-16 14:58:21.282894 | orchestrator | Thursday 16 January 2025 14:58:21 +0000 (0:00:00.098) 0:00:43.102 ****** 2025-01-16 14:58:21.550423 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:58:21.550659 | orchestrator | 2025-01-16 14:58:21.550929 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-01-16 14:58:21.550967 | orchestrator | Thursday 16 January 2025 14:58:21 +0000 (0:00:00.268) 0:00:43.370 ****** 2025-01-16 14:58:21.822644 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:58:21.822812 | orchestrator | 2025-01-16 14:58:21.822837 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-01-16 14:58:21.822932 | orchestrator | Thursday 16 January 2025 14:58:21 +0000 (0:00:00.271) 0:00:43.641 ****** 2025-01-16 14:58:22.013438 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:58:22.013945 | orchestrator | 2025-01-16 14:58:22.014135 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-01-16 14:58:22.014181 | orchestrator | Thursday 16 January 2025 14:58:22 +0000 (0:00:00.191) 0:00:43.833 ****** 2025-01-16 14:58:22.126454 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-53007ac5-07c2-53cd-add6-e57729925218', 'vg_name': 'ceph-53007ac5-07c2-53cd-add6-e57729925218'}) 2025-01-16 14:58:22.227793 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-54c8019f-0033-5b40-9c4f-7f2e43f78b89', 'vg_name': 'ceph-54c8019f-0033-5b40-9c4f-7f2e43f78b89'}) 2025-01-16 14:58:22.227938 | orchestrator | 2025-01-16 14:58:22.227957 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-01-16 14:58:22.227972 | orchestrator | Thursday 16 January 2025 14:58:22 +0000 (0:00:00.112) 0:00:43.945 ****** 2025-01-16 14:58:22.228001 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53007ac5-07c2-53cd-add6-e57729925218', 'data_vg': 'ceph-53007ac5-07c2-53cd-add6-e57729925218'})  2025-01-16 14:58:22.228358 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54c8019f-0033-5b40-9c4f-7f2e43f78b89', 'data_vg': 'ceph-54c8019f-0033-5b40-9c4f-7f2e43f78b89'})  2025-01-16 14:58:22.228475 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:22.228487 | orchestrator | 2025-01-16 14:58:22.228506 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-01-16 14:58:22.228567 | orchestrator | Thursday 16 January 2025 14:58:22 +0000 (0:00:00.101) 0:00:44.047 ****** 2025-01-16 14:58:22.332483 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53007ac5-07c2-53cd-add6-e57729925218', 'data_vg': 'ceph-53007ac5-07c2-53cd-add6-e57729925218'})  2025-01-16 14:58:22.332688 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54c8019f-0033-5b40-9c4f-7f2e43f78b89', 'data_vg': 'ceph-54c8019f-0033-5b40-9c4f-7f2e43f78b89'})  
2025-01-16 14:58:22.332736 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:22.332810 | orchestrator | 2025-01-16 14:58:22.333081 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-01-16 14:58:22.333178 | orchestrator | Thursday 16 January 2025 14:58:22 +0000 (0:00:00.104) 0:00:44.152 ****** 2025-01-16 14:58:22.438386 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-53007ac5-07c2-53cd-add6-e57729925218', 'data_vg': 'ceph-53007ac5-07c2-53cd-add6-e57729925218'})  2025-01-16 14:58:22.438555 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54c8019f-0033-5b40-9c4f-7f2e43f78b89', 'data_vg': 'ceph-54c8019f-0033-5b40-9c4f-7f2e43f78b89'})  2025-01-16 14:58:22.438578 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:22.438733 | orchestrator | 2025-01-16 14:58:22.438753 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-01-16 14:58:22.438978 | orchestrator | Thursday 16 January 2025 14:58:22 +0000 (0:00:00.106) 0:00:44.258 ****** 2025-01-16 14:58:22.731405 | orchestrator | ok: [testbed-node-5] => { 2025-01-16 14:58:22.731702 | orchestrator |  "lvm_report": { 2025-01-16 14:58:22.731735 | orchestrator |  "lv": [ 2025-01-16 14:58:22.732139 | orchestrator |  { 2025-01-16 14:58:22.732388 | orchestrator |  "lv_name": "osd-block-53007ac5-07c2-53cd-add6-e57729925218", 2025-01-16 14:58:22.732798 | orchestrator |  "vg_name": "ceph-53007ac5-07c2-53cd-add6-e57729925218" 2025-01-16 14:58:22.733024 | orchestrator |  }, 2025-01-16 14:58:22.733393 | orchestrator |  { 2025-01-16 14:58:22.733641 | orchestrator |  "lv_name": "osd-block-54c8019f-0033-5b40-9c4f-7f2e43f78b89", 2025-01-16 14:58:22.733970 | orchestrator |  "vg_name": "ceph-54c8019f-0033-5b40-9c4f-7f2e43f78b89" 2025-01-16 14:58:22.734386 | orchestrator |  } 2025-01-16 14:58:22.734833 | orchestrator |  ], 2025-01-16 14:58:22.735193 | orchestrator |  "pv": [ 2025-01-16 14:58:22.735282 | orchestrator |  { 2025-01-16 14:58:22.735601 | orchestrator |  "pv_name": "/dev/sdb", 2025-01-16 14:58:22.735969 | orchestrator |  "vg_name": "ceph-53007ac5-07c2-53cd-add6-e57729925218" 2025-01-16 14:58:22.736302 | orchestrator |  }, 2025-01-16 14:58:22.736653 | orchestrator |  { 2025-01-16 14:58:22.737216 | orchestrator |  "pv_name": "/dev/sdc", 2025-01-16 14:58:22.737617 | orchestrator |  "vg_name": "ceph-54c8019f-0033-5b40-9c4f-7f2e43f78b89" 2025-01-16 14:58:22.737647 | orchestrator |  } 2025-01-16 14:58:22.737804 | orchestrator |  ] 2025-01-16 14:58:22.738294 | orchestrator |  } 2025-01-16 14:58:22.738449 | orchestrator | } 2025-01-16 14:58:22.738654 | orchestrator | 2025-01-16 14:58:22.739422 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 14:58:22.739946 | orchestrator | 2025-01-16 14:58:22 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-01-16 14:58:22.740001 | orchestrator | 2025-01-16 14:58:22 | INFO  | Please wait and do not abort execution. 
2025-01-16 14:58:22.740037 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-01-16 14:58:22.740217 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-01-16 14:58:22.740412 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-01-16 14:58:22.740717 | orchestrator | 2025-01-16 14:58:22.741017 | orchestrator | 2025-01-16 14:58:22.741214 | orchestrator | 2025-01-16 14:58:22.741399 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 14:58:22.741727 | orchestrator | Thursday 16 January 2025 14:58:22 +0000 (0:00:00.292) 0:00:44.550 ****** 2025-01-16 14:58:22.742087 | orchestrator | =============================================================================== 2025-01-16 14:58:22.742295 | orchestrator | Create block VGs -------------------------------------------------------- 3.25s 2025-01-16 14:58:22.742718 | orchestrator | Create block LVs -------------------------------------------------------- 2.11s 2025-01-16 14:58:22.742959 | orchestrator | Print LVM report data --------------------------------------------------- 1.38s 2025-01-16 14:58:22.742987 | orchestrator | Add known partitions to the list of available block devices ------------- 1.13s 2025-01-16 14:58:22.743349 | orchestrator | Add known links to the list of available block devices ------------------ 0.95s 2025-01-16 14:58:22.745073 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 0.94s 2025-01-16 14:58:22.745706 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 0.94s 2025-01-16 14:58:22.745744 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 0.91s 2025-01-16 14:58:22.745753 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 0.85s 2025-01-16 14:58:22.745763 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 0.83s 2025-01-16 14:58:22.745771 | orchestrator | Add known partitions to the list of available block devices ------------- 0.55s 2025-01-16 14:58:22.745786 | orchestrator | Add known links to the list of available block devices ------------------ 0.48s 2025-01-16 14:58:22.745951 | orchestrator | Add known links to the list of available block devices ------------------ 0.48s 2025-01-16 14:58:22.746219 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.47s 2025-01-16 14:58:22.746476 | orchestrator | Add known partitions to the list of available block devices ------------- 0.45s 2025-01-16 14:58:22.746776 | orchestrator | Add known partitions to the list of available block devices ------------- 0.44s 2025-01-16 14:58:22.747012 | orchestrator | Get initial list of available block devices ----------------------------- 0.43s 2025-01-16 14:58:22.747146 | orchestrator | Add known links to the list of available block devices ------------------ 0.41s 2025-01-16 14:58:22.747324 | orchestrator | Fail if number of OSDs exceeds num_osds for a DB VG --------------------- 0.41s 2025-01-16 14:58:22.747463 | orchestrator | Print 'Create WAL LVs for ceph_db_wal_devices' -------------------------- 0.41s 2025-01-16 14:58:23.965785 | orchestrator | 2025-01-16 14:58:23 | INFO  | Task 99f8caab-cd50-4ce7-84bd-27bbc0be9776 (facts) was prepared for execution. 
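For reference, the LVM layout assembled by the play above — one volume group per entry in ceph_osd_devices, each carrying a single osd-block-<uuid> logical volume, followed by the lvs/pvs report printed as lvm_report — can be approximated outside Ansible with a short script along these lines. This is a minimal sketch assuming the stock LVM command-line tools are available on the node; the ceph_osd_devices dictionary and device names are copied from the log for illustration and are not the actual osism role code.

#!/usr/bin/env python3
"""Illustrative sketch (not the osism implementation) of the
'Create block VGs' / 'Create block LVs' steps and the final LVM report."""
import json
import subprocess

# Equivalent of the ceph_osd_devices entries seen in the log above.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "53007ac5-07c2-53cd-add6-e57729925218"},
    "sdc": {"osd_lvm_uuid": "54c8019f-0033-5b40-9c4f-7f2e43f78b89"},
}

for device, params in ceph_osd_devices.items():
    uuid = params["osd_lvm_uuid"]
    vg = f"ceph-{uuid}"
    lv = f"osd-block-{uuid}"
    # One VG on the raw disk, then a single LV spanning the whole VG.
    subprocess.run(["vgcreate", vg, f"/dev/{device}"], check=True)
    subprocess.run(["lvcreate", "-l", "100%FREE", "-n", lv, vg], check=True)

# Gather the same kind of report the play prints at the end;
# lvs/pvs support machine-readable JSON output.
lvs = json.loads(subprocess.run(
    ["lvs", "--reportformat", "json", "-o", "lv_name,vg_name"],
    capture_output=True, text=True, check=True).stdout)
pvs = json.loads(subprocess.run(
    ["pvs", "--reportformat", "json", "-o", "pv_name,vg_name"],
    capture_output=True, text=True, check=True).stdout)
print(json.dumps({"lv": lvs["report"][0]["lv"],
                  "pv": pvs["report"][0]["pv"]}, indent=2))

The JSON printed at the end corresponds to the lv and pv sections of the lvm_report shown earlier in the log.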
2025-01-16 14:58:27.724690 | orchestrator | 2025-01-16 14:58:23 | INFO  | It takes a moment until task 99f8caab-cd50-4ce7-84bd-27bbc0be9776 (facts) has been started and output is visible here. 2025-01-16 14:58:27.724834 | orchestrator | 2025-01-16 14:58:30.338912 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-01-16 14:58:30.339064 | orchestrator | 2025-01-16 14:58:30.339085 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-01-16 14:58:30.339115 | orchestrator | Thursday 16 January 2025 14:58:27 +0000 (0:00:01.549) 0:00:01.549 ****** 2025-01-16 14:58:30.339884 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:58:30.340290 | orchestrator | ok: [testbed-manager] 2025-01-16 14:58:30.340364 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:58:30.340391 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:58:30.340487 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:58:30.340652 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:58:30.341142 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:58:30.341449 | orchestrator | 2025-01-16 14:58:30.341491 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-01-16 14:58:30.341652 | orchestrator | Thursday 16 January 2025 14:58:30 +0000 (0:00:02.612) 0:00:04.162 ****** 2025-01-16 14:58:30.453533 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:58:30.513250 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:58:30.572315 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:58:30.634530 | orchestrator | skipping: [testbed-node-2] 2025-01-16 14:58:30.693485 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:58:31.926324 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:58:31.926492 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:31.926537 | orchestrator | 2025-01-16 14:58:31.926552 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-01-16 14:58:31.926567 | orchestrator | 2025-01-16 14:58:31.926580 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-01-16 14:58:31.926599 | orchestrator | Thursday 16 January 2025 14:58:31 +0000 (0:00:01.590) 0:00:05.752 ****** 2025-01-16 14:58:35.690393 | orchestrator | ok: [testbed-node-1] 2025-01-16 14:58:35.690604 | orchestrator | ok: [testbed-node-2] 2025-01-16 14:58:35.690624 | orchestrator | ok: [testbed-node-0] 2025-01-16 14:58:35.690635 | orchestrator | ok: [testbed-manager] 2025-01-16 14:58:35.690891 | orchestrator | ok: [testbed-node-3] 2025-01-16 14:58:35.691089 | orchestrator | ok: [testbed-node-4] 2025-01-16 14:58:35.691247 | orchestrator | ok: [testbed-node-5] 2025-01-16 14:58:35.691472 | orchestrator | 2025-01-16 14:58:35.694130 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-01-16 14:58:35.800123 | orchestrator | 2025-01-16 14:58:35.800226 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-01-16 14:58:35.800240 | orchestrator | Thursday 16 January 2025 14:58:35 +0000 (0:00:03.765) 0:00:09.517 ****** 2025-01-16 14:58:35.800265 | orchestrator | skipping: [testbed-manager] 2025-01-16 14:58:35.856028 | orchestrator | skipping: [testbed-node-0] 2025-01-16 14:58:35.920296 | orchestrator | skipping: [testbed-node-1] 2025-01-16 14:58:36.012568 | orchestrator | skipping: [testbed-node-2] 2025-01-16 
14:58:36.091703 | orchestrator | skipping: [testbed-node-3] 2025-01-16 14:58:37.409162 | orchestrator | skipping: [testbed-node-4] 2025-01-16 14:58:37.409311 | orchestrator | skipping: [testbed-node-5] 2025-01-16 14:58:37.409333 | orchestrator | 2025-01-16 14:58:37.409349 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 14:58:37.409365 | orchestrator | 2025-01-16 14:58:37 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-01-16 14:58:37.409381 | orchestrator | 2025-01-16 14:58:37 | INFO  | Please wait and do not abort execution. 2025-01-16 14:58:37.409404 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 14:58:37.409680 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 14:58:37.409987 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 14:58:37.410242 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 14:58:37.410482 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 14:58:37.410931 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 14:58:37.410967 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 14:58:37.411222 | orchestrator | 2025-01-16 14:58:37.411472 | orchestrator | 2025-01-16 14:58:37.411730 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 14:58:37.411983 | orchestrator | Thursday 16 January 2025 14:58:37 +0000 (0:00:01.717) 0:00:11.235 ****** 2025-01-16 14:58:37.412253 | orchestrator | =============================================================================== 2025-01-16 14:58:37.412478 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.77s 2025-01-16 14:58:37.412848 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 2.61s 2025-01-16 14:58:37.413097 | orchestrator | Gather facts for all hosts ---------------------------------------------- 1.72s 2025-01-16 14:58:37.413477 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.59s 2025-01-16 14:58:37.716580 | orchestrator | 2025-01-16 14:58:37.718006 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Thu Jan 16 14:58:37 UTC 2025 2025-01-16 14:58:38.674245 | orchestrator | 2025-01-16 14:58:38.674340 | orchestrator | 2025-01-16 14:58:38 | INFO  | Collection nutshell is prepared for execution 2025-01-16 14:58:38.677327 | orchestrator | 2025-01-16 14:58:38 | INFO  | D [0] - dotfiles 2025-01-16 14:58:38.677379 | orchestrator | 2025-01-16 14:58:38 | INFO  | D [0] - homer 2025-01-16 14:58:38.678440 | orchestrator | 2025-01-16 14:58:38 | INFO  | D [0] - netdata 2025-01-16 14:58:38.678487 | orchestrator | 2025-01-16 14:58:38 | INFO  | D [0] - openstackclient 2025-01-16 14:58:38.678506 | orchestrator | 2025-01-16 14:58:38 | INFO  | D [0] - phpmyadmin 2025-01-16 14:58:38.678511 | orchestrator | 2025-01-16 14:58:38 | INFO  | A [0] - common 2025-01-16 14:58:38.678523 | orchestrator | 2025-01-16 14:58:38 | INFO  | A [1] -- loadbalancer 2025-01-16 14:58:38.678561 | orchestrator | 2025-01-16 14:58:38 | INFO  | D [2] 
--- opensearch 2025-01-16 14:58:38.678568 | orchestrator | 2025-01-16 14:58:38 | INFO  | A [2] --- mariadb-ng 2025-01-16 14:58:38.678575 | orchestrator | 2025-01-16 14:58:38 | INFO  | D [3] ---- horizon 2025-01-16 14:58:38.678612 | orchestrator | 2025-01-16 14:58:38 | INFO  | A [3] ---- keystone 2025-01-16 14:58:38.678629 | orchestrator | 2025-01-16 14:58:38 | INFO  | A [4] ----- neutron 2025-01-16 14:58:38.678739 | orchestrator | 2025-01-16 14:58:38 | INFO  | D [5] ------ wait-for-nova 2025-01-16 14:58:38.678845 | orchestrator | 2025-01-16 14:58:38 | INFO  | A [5] ------ octavia 2025-01-16 14:58:38.679446 | orchestrator | 2025-01-16 14:58:38 | INFO  | D [4] ----- barbican 2025-01-16 14:58:38.679716 | orchestrator | 2025-01-16 14:58:38 | INFO  | D [4] ----- designate 2025-01-16 14:58:38.679742 | orchestrator | 2025-01-16 14:58:38 | INFO  | D [4] ----- ironic 2025-01-16 14:58:38.679751 | orchestrator | 2025-01-16 14:58:38 | INFO  | D [4] ----- placement 2025-01-16 14:58:38.679759 | orchestrator | 2025-01-16 14:58:38 | INFO  | D [4] ----- magnum 2025-01-16 14:58:38.679772 | orchestrator | 2025-01-16 14:58:38 | INFO  | A [1] -- openvswitch 2025-01-16 14:58:38.679945 | orchestrator | 2025-01-16 14:58:38 | INFO  | D [2] --- ovn 2025-01-16 14:58:38.679964 | orchestrator | 2025-01-16 14:58:38 | INFO  | D [1] -- memcached 2025-01-16 14:58:38.680043 | orchestrator | 2025-01-16 14:58:38 | INFO  | D [1] -- redis 2025-01-16 14:58:38.680058 | orchestrator | 2025-01-16 14:58:38 | INFO  | D [1] -- rabbitmq-ng 2025-01-16 14:58:38.680172 | orchestrator | 2025-01-16 14:58:38 | INFO  | A [0] - kubernetes 2025-01-16 14:58:38.680263 | orchestrator | 2025-01-16 14:58:38 | INFO  | D [1] -- kubeconfig 2025-01-16 14:58:38.680314 | orchestrator | 2025-01-16 14:58:38 | INFO  | A [1] -- copy-kubeconfig 2025-01-16 14:58:38.680469 | orchestrator | 2025-01-16 14:58:38 | INFO  | A [0] - ceph 2025-01-16 14:58:38.681585 | orchestrator | 2025-01-16 14:58:38 | INFO  | A [1] -- ceph-pools 2025-01-16 14:58:38.683206 | orchestrator | 2025-01-16 14:58:38 | INFO  | A [2] --- copy-ceph-keys 2025-01-16 14:58:38.683246 | orchestrator | 2025-01-16 14:58:38 | INFO  | A [3] ---- cephclient 2025-01-16 14:58:38.683261 | orchestrator | 2025-01-16 14:58:38 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-01-16 14:58:38.765427 | orchestrator | 2025-01-16 14:58:38 | INFO  | A [4] ----- wait-for-keystone 2025-01-16 14:58:38.765546 | orchestrator | 2025-01-16 14:58:38 | INFO  | D [5] ------ kolla-ceph-rgw 2025-01-16 14:58:38.765564 | orchestrator | 2025-01-16 14:58:38 | INFO  | D [5] ------ glance 2025-01-16 14:58:38.765580 | orchestrator | 2025-01-16 14:58:38 | INFO  | D [5] ------ cinder 2025-01-16 14:58:38.765594 | orchestrator | 2025-01-16 14:58:38 | INFO  | D [5] ------ nova 2025-01-16 14:58:38.765609 | orchestrator | 2025-01-16 14:58:38 | INFO  | A [4] ----- prometheus 2025-01-16 14:58:38.765624 | orchestrator | 2025-01-16 14:58:38 | INFO  | D [5] ------ grafana 2025-01-16 14:58:38.765657 | orchestrator | 2025-01-16 14:58:38 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-01-16 14:58:40.134000 | orchestrator | 2025-01-16 14:58:38 | INFO  | Tasks are running in the background 2025-01-16 14:58:40.134229 | orchestrator | 2025-01-16 14:58:40 | INFO  | No task IDs specified, wait for all currently running tasks 2025-01-16 14:58:42.196380 | orchestrator | 2025-01-16 14:58:42 | INFO  | Task fb8cdaa3-f571-457c-825d-fc303e7f9398 is in state STARTED 2025-01-16 14:58:42.197014 | orchestrator | 2025-01-16 14:58:42 | 
INFO  | Task c778fb1a-e574-4cb5-8cff-8543bea04ff7 is in state STARTED 2025-01-16 14:58:42.197076 | orchestrator | 2025-01-16 14:58:42 | INFO  | Task c5aed7e0-9fc3-4a26-b34e-2b1151f606b4 is in state STARTED 2025-01-16 14:58:42.197092 | orchestrator | 2025-01-16 14:58:42 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 14:58:42.197493 | orchestrator | 2025-01-16 14:58:42 | INFO  | Task 443b3e12-ce2c-49f4-a9af-e72916dbf1f4 is in state STARTED 2025-01-16 14:58:42.198118 | orchestrator | 2025-01-16 14:58:42 | INFO  | Task 3d9eff6b-126d-4151-8ee2-4485e1be5441 is in state STARTED 2025-01-16 14:58:45.243256 | orchestrator | 2025-01-16 14:58:42 | INFO  | Wait 1 second(s) until the next check 2025-01-16 14:58:45.243355 | orchestrator | 2025-01-16 14:58:45 | INFO  | Task fb8cdaa3-f571-457c-825d-fc303e7f9398 is in state STARTED 2025-01-16 14:58:45.243501 | orchestrator | 2025-01-16 14:58:45 | INFO  | Task c778fb1a-e574-4cb5-8cff-8543bea04ff7 is in state STARTED 2025-01-16 14:58:45.243514 | orchestrator | 2025-01-16 14:58:45 | INFO  | Task c5aed7e0-9fc3-4a26-b34e-2b1151f606b4 is in state STARTED 2025-01-16 14:58:45.244000 | orchestrator | 2025-01-16 14:58:45 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 14:58:45.246241 | orchestrator | 2025-01-16 14:58:45 | INFO  | Task 443b3e12-ce2c-49f4-a9af-e72916dbf1f4 is in state STARTED 2025-01-16 14:58:45.246464 | orchestrator | 2025-01-16 14:58:45 | INFO  | Task 3d9eff6b-126d-4151-8ee2-4485e1be5441 is in state STARTED 2025-01-16 14:58:45.246544 | orchestrator | 2025-01-16 14:58:45 | INFO  | Wait 1 second(s) until the next check 2025-01-16 14:58:48.281131 | orchestrator | 2025-01-16 14:58:48 | INFO  | Task fb8cdaa3-f571-457c-825d-fc303e7f9398 is in state STARTED 2025-01-16 14:58:51.321271 | orchestrator | 2025-01-16 14:58:48 | INFO  | Task c778fb1a-e574-4cb5-8cff-8543bea04ff7 is in state STARTED 2025-01-16 14:58:51.321396 | orchestrator | 2025-01-16 14:58:48 | INFO  | Task c5aed7e0-9fc3-4a26-b34e-2b1151f606b4 is in state STARTED 2025-01-16 14:58:51.321429 | orchestrator | 2025-01-16 14:58:48 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 14:58:51.321445 | orchestrator | 2025-01-16 14:58:48 | INFO  | Task 443b3e12-ce2c-49f4-a9af-e72916dbf1f4 is in state STARTED 2025-01-16 14:58:51.321451 | orchestrator | 2025-01-16 14:58:48 | INFO  | Task 3d9eff6b-126d-4151-8ee2-4485e1be5441 is in state STARTED 2025-01-16 14:58:51.321457 | orchestrator | 2025-01-16 14:58:48 | INFO  | Wait 1 second(s) until the next check 2025-01-16 14:58:51.321482 | orchestrator | 2025-01-16 14:58:51 | INFO  | Task fb8cdaa3-f571-457c-825d-fc303e7f9398 is in state STARTED 2025-01-16 14:58:51.321527 | orchestrator | 2025-01-16 14:58:51 | INFO  | Task c778fb1a-e574-4cb5-8cff-8543bea04ff7 is in state STARTED 2025-01-16 14:58:51.321537 | orchestrator | 2025-01-16 14:58:51 | INFO  | Task c5aed7e0-9fc3-4a26-b34e-2b1151f606b4 is in state STARTED 2025-01-16 14:58:51.321799 | orchestrator | 2025-01-16 14:58:51 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 14:58:51.322355 | orchestrator | 2025-01-16 14:58:51 | INFO  | Task 443b3e12-ce2c-49f4-a9af-e72916dbf1f4 is in state STARTED 2025-01-16 14:58:51.323073 | orchestrator | 2025-01-16 14:58:51 | INFO  | Task 3d9eff6b-126d-4151-8ee2-4485e1be5441 is in state STARTED 2025-01-16 14:58:54.351729 | orchestrator | 2025-01-16 14:58:51 | INFO  | Wait 1 second(s) until the next check 2025-01-16 14:58:54.351931 | orchestrator | 
2025-01-16 14:58:54 | INFO  | Task fb8cdaa3-f571-457c-825d-fc303e7f9398 is in state STARTED 2025-01-16 14:58:54.353441 | orchestrator | 2025-01-16 14:58:54 | INFO  | Task c778fb1a-e574-4cb5-8cff-8543bea04ff7 is in state STARTED 2025-01-16 14:58:54.353502 | orchestrator | 2025-01-16 14:58:54 | INFO  | Task c5aed7e0-9fc3-4a26-b34e-2b1151f606b4 is in state STARTED 2025-01-16 14:58:54.353772 | orchestrator | 2025-01-16 14:58:54 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 14:58:54.354208 | orchestrator | 2025-01-16 14:58:54 | INFO  | Task 443b3e12-ce2c-49f4-a9af-e72916dbf1f4 is in state STARTED 2025-01-16 14:58:54.354705 | orchestrator | 2025-01-16 14:58:54 | INFO  | Task 3d9eff6b-126d-4151-8ee2-4485e1be5441 is in state STARTED 2025-01-16 14:58:54.354831 | orchestrator | 2025-01-16 14:58:54 | INFO  | Wait 1 second(s) until the next check 2025-01-16 14:58:57.404571 | orchestrator | 2025-01-16 14:58:57 | INFO  | Task fb8cdaa3-f571-457c-825d-fc303e7f9398 is in state STARTED 2025-01-16 14:58:57.405579 | orchestrator | 2025-01-16 14:58:57 | INFO  | Task c778fb1a-e574-4cb5-8cff-8543bea04ff7 is in state STARTED 2025-01-16 14:58:57.405660 | orchestrator | 2025-01-16 14:58:57 | INFO  | Task c5aed7e0-9fc3-4a26-b34e-2b1151f606b4 is in state STARTED 2025-01-16 14:58:57.405701 | orchestrator | 2025-01-16 14:58:57 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 14:58:57.406147 | orchestrator | 2025-01-16 14:58:57 | INFO  | Task 443b3e12-ce2c-49f4-a9af-e72916dbf1f4 is in state STARTED 2025-01-16 14:58:57.407345 | orchestrator | 2025-01-16 14:58:57 | INFO  | Task 3d9eff6b-126d-4151-8ee2-4485e1be5441 is in state STARTED 2025-01-16 14:59:00.442592 | orchestrator | 2025-01-16 14:58:57 | INFO  | Wait 1 second(s) until the next check 2025-01-16 14:59:00.442825 | orchestrator | 2025-01-16 14:59:00 | INFO  | Task fb8cdaa3-f571-457c-825d-fc303e7f9398 is in state STARTED 2025-01-16 14:59:00.442995 | orchestrator | 2025-01-16 14:59:00 | INFO  | Task c778fb1a-e574-4cb5-8cff-8543bea04ff7 is in state STARTED 2025-01-16 14:59:00.446619 | orchestrator | 2025-01-16 14:59:00 | INFO  | Task c5aed7e0-9fc3-4a26-b34e-2b1151f606b4 is in state STARTED 2025-01-16 14:59:00.446895 | orchestrator | 2025-01-16 14:59:00 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 14:59:00.447258 | orchestrator | 2025-01-16 14:59:00 | INFO  | Task 443b3e12-ce2c-49f4-a9af-e72916dbf1f4 is in state STARTED 2025-01-16 14:59:00.450398 | orchestrator | 2025-01-16 14:59:00 | INFO  | Task 3d9eff6b-126d-4151-8ee2-4485e1be5441 is in state STARTED 2025-01-16 14:59:03.489226 | orchestrator | 2025-01-16 14:59:00 | INFO  | Wait 1 second(s) until the next check 2025-01-16 14:59:03.489369 | orchestrator | 2025-01-16 14:59:03 | INFO  | Task fb8cdaa3-f571-457c-825d-fc303e7f9398 is in state STARTED 2025-01-16 14:59:03.490268 | orchestrator | 2025-01-16 14:59:03 | INFO  | Task c778fb1a-e574-4cb5-8cff-8543bea04ff7 is in state STARTED 2025-01-16 14:59:03.490423 | orchestrator | 2025-01-16 14:59:03 | INFO  | Task c5aed7e0-9fc3-4a26-b34e-2b1151f606b4 is in state STARTED 2025-01-16 14:59:03.490459 | orchestrator | 2025-01-16 14:59:03 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 14:59:03.490547 | orchestrator | 2025-01-16 14:59:03 | INFO  | Task 443b3e12-ce2c-49f4-a9af-e72916dbf1f4 is in state STARTED 2025-01-16 14:59:03.492498 | orchestrator | 2025-01-16 14:59:03 | INFO  | Task 3d9eff6b-126d-4151-8ee2-4485e1be5441 is in state 
STARTED 2025-01-16 14:59:03.493235 | orchestrator | 2025-01-16 14:59:03 | INFO  | Wait 1 second(s) until the next check 2025-01-16 14:59:06.572705 | orchestrator | 2025-01-16 14:59:06 | INFO  | Task fb8cdaa3-f571-457c-825d-fc303e7f9398 is in state STARTED 2025-01-16 14:59:06.574091 | orchestrator | 2025-01-16 14:59:06 | INFO  | Task c778fb1a-e574-4cb5-8cff-8543bea04ff7 is in state STARTED 2025-01-16 14:59:06.577607 | orchestrator | 2025-01-16 14:59:06 | INFO  | Task c5aed7e0-9fc3-4a26-b34e-2b1151f606b4 is in state STARTED 2025-01-16 14:59:06.578996 | orchestrator | 2025-01-16 14:59:06 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 14:59:06.581240 | orchestrator | 2025-01-16 14:59:06 | INFO  | Task 443b3e12-ce2c-49f4-a9af-e72916dbf1f4 is in state STARTED 2025-01-16 14:59:06.581683 | orchestrator | 2025-01-16 14:59:06 | INFO  | Task 3d9eff6b-126d-4151-8ee2-4485e1be5441 is in state STARTED 2025-01-16 14:59:06.581827 | orchestrator | 2025-01-16 14:59:06 | INFO  | Wait 1 second(s) until the next check 2025-01-16 14:59:09.621102 | orchestrator | 2025-01-16 14:59:09 | INFO  | Task fb8cdaa3-f571-457c-825d-fc303e7f9398 is in state STARTED 2025-01-16 14:59:09.621371 | orchestrator | 2025-01-16 14:59:09 | INFO  | Task c778fb1a-e574-4cb5-8cff-8543bea04ff7 is in state STARTED 2025-01-16 14:59:09.621414 | orchestrator | 2025-01-16 14:59:09 | INFO  | Task c5aed7e0-9fc3-4a26-b34e-2b1151f606b4 is in state STARTED 2025-01-16 14:59:09.622465 | orchestrator | 2025-01-16 14:59:09 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 14:59:09.622854 | orchestrator | 2025-01-16 14:59:09 | INFO  | Task 443b3e12-ce2c-49f4-a9af-e72916dbf1f4 is in state STARTED 2025-01-16 14:59:09.624569 | orchestrator | 2025-01-16 14:59:09 | INFO  | Task 3d9eff6b-126d-4151-8ee2-4485e1be5441 is in state STARTED 2025-01-16 14:59:12.661734 | orchestrator | 2025-01-16 14:59:09 | INFO  | Wait 1 second(s) until the next check 2025-01-16 14:59:12.661941 | orchestrator | 2025-01-16 14:59:12 | INFO  | Task fb8cdaa3-f571-457c-825d-fc303e7f9398 is in state STARTED 2025-01-16 14:59:12.662731 | orchestrator | 2025-01-16 14:59:12 | INFO  | Task c778fb1a-e574-4cb5-8cff-8543bea04ff7 is in state STARTED 2025-01-16 14:59:12.662802 | orchestrator | 2025-01-16 14:59:12 | INFO  | Task c5aed7e0-9fc3-4a26-b34e-2b1151f606b4 is in state STARTED 2025-01-16 14:59:12.664037 | orchestrator | 2025-01-16 14:59:12 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 14:59:12.664113 | orchestrator | 2025-01-16 14:59:12 | INFO  | Task 443b3e12-ce2c-49f4-a9af-e72916dbf1f4 is in state STARTED 2025-01-16 14:59:12.664157 | orchestrator | 2025-01-16 14:59:12 | INFO  | Task 3d9eff6b-126d-4151-8ee2-4485e1be5441 is in state STARTED 2025-01-16 14:59:12.664248 | orchestrator | 2025-01-16 14:59:12 | INFO  | Wait 1 second(s) until the next check 2025-01-16 14:59:15.708683 | orchestrator | 2025-01-16 14:59:15 | INFO  | Task fb8cdaa3-f571-457c-825d-fc303e7f9398 is in state STARTED 2025-01-16 14:59:15.710307 | orchestrator | 2025-01-16 14:59:15 | INFO  | Task c778fb1a-e574-4cb5-8cff-8543bea04ff7 is in state STARTED 2025-01-16 14:59:15.712230 | orchestrator | 2025-01-16 14:59:15 | INFO  | Task c5aed7e0-9fc3-4a26-b34e-2b1151f606b4 is in state STARTED 2025-01-16 14:59:15.716566 | orchestrator | 2025-01-16 14:59:15 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 14:59:15.716908 | orchestrator | 2025-01-16 14:59:15 | INFO  | Task 
443b3e12-ce2c-49f4-a9af-e72916dbf1f4 is in state STARTED 2025-01-16 14:59:15.719487 | orchestrator | 2025-01-16 14:59:15 | INFO  | Task 3d9eff6b-126d-4151-8ee2-4485e1be5441 is in state STARTED 2025-01-16 14:59:18.800542 | orchestrator | 2025-01-16 14:59:15 | INFO  | Wait 1 second(s) until the next check 2025-01-16 14:59:18.800686 | orchestrator | 2025-01-16 14:59:18 | INFO  | Task fb8cdaa3-f571-457c-825d-fc303e7f9398 is in state STARTED 2025-01-16 14:59:21.835493 | orchestrator | 2025-01-16 14:59:18 | INFO  | Task c778fb1a-e574-4cb5-8cff-8543bea04ff7 is in state STARTED 2025-01-16 14:59:21.835608 | orchestrator | 2025-01-16 14:59:18 | INFO  | Task c5aed7e0-9fc3-4a26-b34e-2b1151f606b4 is in state STARTED 2025-01-16 14:59:21.835622 | orchestrator | 2025-01-16 14:59:18 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 14:59:21.835632 | orchestrator | 2025-01-16 14:59:18 | INFO  | Task 443b3e12-ce2c-49f4-a9af-e72916dbf1f4 is in state STARTED 2025-01-16 14:59:21.835642 | orchestrator | 2025-01-16 14:59:18 | INFO  | Task 3d9eff6b-126d-4151-8ee2-4485e1be5441 is in state STARTED 2025-01-16 14:59:21.835652 | orchestrator | 2025-01-16 14:59:18 | INFO  | Wait 1 second(s) until the next check 2025-01-16 14:59:21.835675 | orchestrator | 2025-01-16 14:59:21 | INFO  | Task fb8cdaa3-f571-457c-825d-fc303e7f9398 is in state STARTED 2025-01-16 14:59:21.839785 | orchestrator | 2025-01-16 14:59:21 | INFO  | Task c778fb1a-e574-4cb5-8cff-8543bea04ff7 is in state STARTED 2025-01-16 14:59:21.840969 | orchestrator | 2025-01-16 14:59:21 | INFO  | Task c5aed7e0-9fc3-4a26-b34e-2b1151f606b4 is in state STARTED 2025-01-16 14:59:21.850550 | orchestrator | 2025-01-16 14:59:21 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 14:59:21.854263 | orchestrator | 2025-01-16 14:59:21 | INFO  | Task 443b3e12-ce2c-49f4-a9af-e72916dbf1f4 is in state STARTED 2025-01-16 14:59:21.854367 | orchestrator | 2025-01-16 14:59:21 | INFO  | Task 3d9eff6b-126d-4151-8ee2-4485e1be5441 is in state STARTED 2025-01-16 14:59:24.890926 | orchestrator | 2025-01-16 14:59:21 | INFO  | Wait 1 second(s) until the next check 2025-01-16 14:59:24.891041 | orchestrator | 2025-01-16 14:59:24 | INFO  | Task fb8cdaa3-f571-457c-825d-fc303e7f9398 is in state STARTED 2025-01-16 14:59:24.892375 | orchestrator | 2025-01-16 14:59:24 | INFO  | Task c778fb1a-e574-4cb5-8cff-8543bea04ff7 is in state STARTED 2025-01-16 14:59:24.893619 | orchestrator | 2025-01-16 14:59:24 | INFO  | Task c5aed7e0-9fc3-4a26-b34e-2b1151f606b4 is in state SUCCESS 2025-01-16 14:59:24.893716 | orchestrator | 2025-01-16 14:59:24.893735 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-01-16 14:59:24.893750 | orchestrator | 2025-01-16 14:59:24.893764 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] 
**** 2025-01-16 14:59:24.893778 | orchestrator | Thursday 16 January 2025 14:58:54 +0000 (0:00:09.695) 0:00:09.695 ****** 2025-01-16 14:59:24.893792 | orchestrator | changed: [testbed-node-0] 2025-01-16 14:59:24.893939 | orchestrator | changed: [testbed-manager] 2025-01-16 14:59:24.893967 | orchestrator | changed: [testbed-node-1] 2025-01-16 14:59:24.893990 | orchestrator | changed: [testbed-node-2] 2025-01-16 14:59:24.894013 | orchestrator | changed: [testbed-node-3] 2025-01-16 14:59:24.894130 | orchestrator | changed: [testbed-node-4] 2025-01-16 14:59:24.894157 | orchestrator | changed: [testbed-node-5] 2025-01-16 14:59:24.894182 | orchestrator | 2025-01-16 14:59:24.894202 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2025-01-16 14:59:24.894217 | orchestrator | Thursday 16 January 2025 14:59:02 +0000 (0:00:08.031) 0:00:17.727 ****** 2025-01-16 14:59:24.894235 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-01-16 14:59:24.894259 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-01-16 14:59:24.894275 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-01-16 14:59:24.894291 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-01-16 14:59:24.894306 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-01-16 14:59:24.894322 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-01-16 14:59:24.894406 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-01-16 14:59:24.894438 | orchestrator | 2025-01-16 14:59:24.894466 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2025-01-16 14:59:24.894488 | orchestrator | Thursday 16 January 2025 14:59:06 +0000 (0:00:04.339) 0:00:22.066 ****** 2025-01-16 14:59:24.894510 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-01-16 14:59:03.090009', 'end': '2025-01-16 14:59:03.093178', 'delta': '0:00:00.003169', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-01-16 14:59:24.894546 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-01-16 14:59:03.104627', 'end': '2025-01-16 14:59:03.108390', 'delta': '0:00:00.003763', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-01-16 14:59:24.894570 | 
orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-01-16 14:59:03.266415', 'end': '2025-01-16 14:59:03.270414', 'delta': '0:00:00.003999', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-01-16 14:59:24.894654 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-01-16 14:59:03.455460', 'end': '2025-01-16 14:59:03.458762', 'delta': '0:00:00.003302', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-01-16 14:59:24.894685 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-01-16 14:59:03.637973', 'end': '2025-01-16 14:59:03.642082', 'delta': '0:00:00.004109', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-01-16 14:59:24.894709 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-01-16 14:59:03.812207', 'end': '2025-01-16 14:59:03.815678', 'delta': '0:00:00.003471', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-01-16 14:59:24.894742 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-01-16 14:59:03.987828', 'end': '2025-01-16 14:59:03.990844', 'delta': '0:00:00.003016', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-01-16 14:59:24.894766 | orchestrator | 2025-01-16 14:59:24.894789 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2025-01-16 14:59:24.894813 | orchestrator | Thursday 16 January 2025 14:59:10 +0000 (0:00:03.552) 0:00:25.618 ****** 2025-01-16 14:59:24.894887 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-01-16 14:59:24.894972 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-01-16 14:59:24.895000 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-01-16 14:59:24.895024 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-01-16 14:59:24.895050 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-01-16 14:59:24.895074 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-01-16 14:59:24.895099 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-01-16 14:59:24.895122 | orchestrator | 2025-01-16 14:59:24.895148 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2025-01-16 14:59:24.895173 | orchestrator | Thursday 16 January 2025 14:59:14 +0000 (0:00:04.355) 0:00:29.974 ****** 2025-01-16 14:59:24.895197 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-01-16 14:59:24.895213 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-01-16 14:59:24.895227 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-01-16 14:59:24.895241 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-01-16 14:59:24.895255 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-01-16 14:59:24.895269 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-01-16 14:59:24.895283 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-01-16 14:59:24.895297 | orchestrator | 2025-01-16 14:59:24.895311 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 14:59:24.895338 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 14:59:24.895520 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 14:59:24.895546 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 14:59:24.895562 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 14:59:24.895577 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 14:59:24.895592 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 14:59:24.895606 | orchestrator | testbed-node-5 : ok=5  
changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 14:59:24.895621 | orchestrator | 2025-01-16 14:59:24.895636 | orchestrator | 2025-01-16 14:59:24.895651 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 14:59:24.895666 | orchestrator | Thursday 16 January 2025 14:59:21 +0000 (0:00:07.091) 0:00:37.066 ****** 2025-01-16 14:59:24.895681 | orchestrator | =============================================================================== 2025-01-16 14:59:24.895696 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 8.03s 2025-01-16 14:59:24.895711 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 7.09s 2025-01-16 14:59:24.895726 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 4.36s 2025-01-16 14:59:24.895740 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 4.34s 2025-01-16 14:59:24.895755 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 3.55s 2025-01-16 14:59:24.895771 | orchestrator | 2025-01-16 14:59:24 | INFO  | Task a8762f20-fe23-4bf4-8143-0c8f90404c45 is in state STARTED 2025-01-16 14:59:24.895792 | orchestrator | 2025-01-16 14:59:24 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 14:59:24.897027 | orchestrator | 2025-01-16 14:59:24 | INFO  | Task 443b3e12-ce2c-49f4-a9af-e72916dbf1f4 is in state STARTED 2025-01-16 14:59:24.897058 | orchestrator | 2025-01-16 14:59:24 | INFO  | Task 3d9eff6b-126d-4151-8ee2-4485e1be5441 is in state STARTED 2025-01-16 14:59:24.897077 | orchestrator | 2025-01-16 14:59:24 | INFO  | Wait 1 second(s) until the next check 2025-01-16 14:59:27.941390 | orchestrator | 2025-01-16 14:59:27 | INFO  | Task fb8cdaa3-f571-457c-825d-fc303e7f9398 is in state STARTED 2025-01-16 14:59:27.946343 | orchestrator | 2025-01-16 14:59:27 | INFO  | Task c778fb1a-e574-4cb5-8cff-8543bea04ff7 is in state STARTED 2025-01-16 14:59:27.946443 | orchestrator | 2025-01-16 14:59:27 | INFO  | Task a8762f20-fe23-4bf4-8143-0c8f90404c45 is in state STARTED 2025-01-16 14:59:27.949590 | orchestrator | 2025-01-16 14:59:27 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 14:59:27.949740 | orchestrator | 2025-01-16 14:59:27 | INFO  | Task 443b3e12-ce2c-49f4-a9af-e72916dbf1f4 is in state STARTED 2025-01-16 14:59:27.950557 | orchestrator | 2025-01-16 14:59:27 | INFO  | Task 3d9eff6b-126d-4151-8ee2-4485e1be5441 is in state STARTED 2025-01-16 14:59:30.987261 | orchestrator | 2025-01-16 14:59:27 | INFO  | Wait 1 second(s) until the next check 2025-01-16 14:59:30.987381 | orchestrator | 2025-01-16 14:59:30 | INFO  | Task fb8cdaa3-f571-457c-825d-fc303e7f9398 is in state STARTED 2025-01-16 14:59:30.988061 | orchestrator | 2025-01-16 14:59:30 | INFO  | Task c778fb1a-e574-4cb5-8cff-8543bea04ff7 is in state STARTED 2025-01-16 14:59:30.989720 | orchestrator | 2025-01-16 14:59:30 | INFO  | Task a8762f20-fe23-4bf4-8143-0c8f90404c45 is in state STARTED 2025-01-16 14:59:30.990173 | orchestrator | 2025-01-16 14:59:30 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 14:59:30.990252 | orchestrator | 2025-01-16 14:59:30 | INFO  | Task 443b3e12-ce2c-49f4-a9af-e72916dbf1f4 is in state STARTED 2025-01-16 14:59:30.990607 | orchestrator | 2025-01-16 14:59:30 | INFO  | Task 3d9eff6b-126d-4151-8ee2-4485e1be5441 is in state STARTED 
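
The PLAY RECAP and TASKS RECAP above show the geerlingguy.dotfiles role cloning a dotfiles repository on every testbed host and replacing ~/.tmux.conf with a symlink into that clone. A minimal, hypothetical Ansible sketch of the same clone-and-link pattern follows; it is not the role's actual task list, and the repository URL, branch name, and file list are assumptions made only for illustration.

- name: Clone and link dotfiles (illustrative sketch, not the real role)
  hosts: all
  vars:
    dotfiles_repo: https://github.com/example/dotfiles.git   # assumption, not taken from this log
    dotfiles_repo_local_destination: ~/dotfiles
    dotfiles_files:
      - .tmux.conf
  tasks:
    - name: Ensure dotfiles repository is cloned locally
      ansible.builtin.git:
        repo: "{{ dotfiles_repo }}"
        dest: "{{ dotfiles_repo_local_destination }}"
        version: main                                         # assumed branch name

    - name: Link dotfiles into home folder
      ansible.builtin.file:
        src: "{{ dotfiles_repo_local_destination }}/{{ item }}"
        dest: "~/{{ item }}"
        state: link
        force: true                                           # replace an existing plain file, as the play above does
      loop: "{{ dotfiles_files }}"

Run with ansible-playbook against a comparable inventory, a sketch like this would produce the same per-host ok/changed pattern that the recap above reports.
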
2025-01-16 14:59:34.041984 | orchestrator | 2025-01-16 14:59:30 | INFO  | Wait 1 second(s) until the next check 2025-01-16 14:59:34.042190 | orchestrator | 2025-01-16 14:59:34 | INFO  | Task fb8cdaa3-f571-457c-825d-fc303e7f9398 is in state STARTED 2025-01-16 14:59:34.042773 | orchestrator | 2025-01-16 14:59:34 | INFO  | Task c778fb1a-e574-4cb5-8cff-8543bea04ff7 is in state STARTED 2025-01-16 14:59:34.044306 | orchestrator | 2025-01-16 14:59:34 | INFO  | Task a8762f20-fe23-4bf4-8143-0c8f90404c45 is in state STARTED 2025-01-16 14:59:34.045956 | orchestrator | 2025-01-16 14:59:34 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 14:59:34.047893 | orchestrator | 2025-01-16 14:59:34 | INFO  | Task 443b3e12-ce2c-49f4-a9af-e72916dbf1f4 is in state STARTED 2025-01-16 14:59:34.048778 | orchestrator | 2025-01-16 14:59:34 | INFO  | Task 3d9eff6b-126d-4151-8ee2-4485e1be5441 is in state STARTED 2025-01-16 14:59:34.048956 | orchestrator | 2025-01-16 14:59:34 | INFO  | Wait 1 second(s) until the next check 2025-01-16 14:59:37.087562 | orchestrator | 2025-01-16 14:59:37 | INFO  | Task fb8cdaa3-f571-457c-825d-fc303e7f9398 is in state STARTED 2025-01-16 14:59:37.088557 | orchestrator | 2025-01-16 14:59:37 | INFO  | Task c778fb1a-e574-4cb5-8cff-8543bea04ff7 is in state STARTED 2025-01-16 14:59:37.088611 | orchestrator | 2025-01-16 14:59:37 | INFO  | Task a8762f20-fe23-4bf4-8143-0c8f90404c45 is in state STARTED 2025-01-16 14:59:37.088628 | orchestrator | 2025-01-16 14:59:37 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 14:59:37.088684 | orchestrator | 2025-01-16 14:59:37 | INFO  | Task 443b3e12-ce2c-49f4-a9af-e72916dbf1f4 is in state STARTED 2025-01-16 14:59:37.088883 | orchestrator | 2025-01-16 14:59:37 | INFO  | Task 3d9eff6b-126d-4151-8ee2-4485e1be5441 is in state STARTED 2025-01-16 14:59:40.118556 | orchestrator | 2025-01-16 14:59:37 | INFO  | Wait 1 second(s) until the next check 2025-01-16 14:59:40.118671 | orchestrator | 2025-01-16 14:59:40 | INFO  | Task fb8cdaa3-f571-457c-825d-fc303e7f9398 is in state STARTED 2025-01-16 14:59:40.119573 | orchestrator | 2025-01-16 14:59:40 | INFO  | Task c778fb1a-e574-4cb5-8cff-8543bea04ff7 is in state STARTED 2025-01-16 14:59:40.119596 | orchestrator | 2025-01-16 14:59:40 | INFO  | Task a8762f20-fe23-4bf4-8143-0c8f90404c45 is in state STARTED 2025-01-16 14:59:40.124141 | orchestrator | 2025-01-16 14:59:40 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 14:59:40.131025 | orchestrator | 2025-01-16 14:59:40 | INFO  | Task 443b3e12-ce2c-49f4-a9af-e72916dbf1f4 is in state STARTED 2025-01-16 14:59:43.171925 | orchestrator | 2025-01-16 14:59:40 | INFO  | Task 3d9eff6b-126d-4151-8ee2-4485e1be5441 is in state STARTED 2025-01-16 14:59:43.172086 | orchestrator | 2025-01-16 14:59:40 | INFO  | Wait 1 second(s) until the next check 2025-01-16 14:59:43.172126 | orchestrator | 2025-01-16 14:59:43 | INFO  | Task fb8cdaa3-f571-457c-825d-fc303e7f9398 is in state STARTED 2025-01-16 14:59:43.174295 | orchestrator | 2025-01-16 14:59:43 | INFO  | Task c778fb1a-e574-4cb5-8cff-8543bea04ff7 is in state STARTED 2025-01-16 14:59:43.174349 | orchestrator | 2025-01-16 14:59:43 | INFO  | Task a8762f20-fe23-4bf4-8143-0c8f90404c45 is in state STARTED 2025-01-16 14:59:43.179707 | orchestrator | 2025-01-16 14:59:43 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 14:59:43.183472 | orchestrator | 2025-01-16 14:59:43 | INFO  | Task 
443b3e12-ce2c-49f4-a9af-e72916dbf1f4 is in state STARTED 2025-01-16 14:59:43.185248 | orchestrator | 2025-01-16 14:59:43 | INFO  | Task 3d9eff6b-126d-4151-8ee2-4485e1be5441 is in state STARTED 2025-01-16 14:59:43.185529 | orchestrator | 2025-01-16 14:59:43 | INFO  | Wait 1 second(s) until the next check 2025-01-16 14:59:46.231976 | orchestrator | 2025-01-16 14:59:46 | INFO  | Task fb8cdaa3-f571-457c-825d-fc303e7f9398 is in state STARTED 2025-01-16 14:59:46.232235 | orchestrator | 2025-01-16 14:59:46 | INFO  | Task c778fb1a-e574-4cb5-8cff-8543bea04ff7 is in state STARTED 2025-01-16 14:59:46.240876 | orchestrator | 2025-01-16 14:59:46 | INFO  | Task a8762f20-fe23-4bf4-8143-0c8f90404c45 is in state STARTED 2025-01-16 14:59:46.245141 | orchestrator | 2025-01-16 14:59:46 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 14:59:46.248687 | orchestrator | 2025-01-16 14:59:46 | INFO  | Task 443b3e12-ce2c-49f4-a9af-e72916dbf1f4 is in state STARTED 2025-01-16 14:59:46.248953 | orchestrator | 2025-01-16 14:59:46 | INFO  | Task 3d9eff6b-126d-4151-8ee2-4485e1be5441 is in state SUCCESS 2025-01-16 14:59:46.249191 | orchestrator | 2025-01-16 14:59:46 | INFO  | Wait 1 second(s) until the next check 2025-01-16 14:59:49.292736 | orchestrator | 2025-01-16 14:59:49 | INFO  | Task fb8cdaa3-f571-457c-825d-fc303e7f9398 is in state STARTED 2025-01-16 14:59:49.292865 | orchestrator | 2025-01-16 14:59:49 | INFO  | Task c778fb1a-e574-4cb5-8cff-8543bea04ff7 is in state STARTED 2025-01-16 14:59:49.303087 | orchestrator | 2025-01-16 14:59:49 | INFO  | Task a8762f20-fe23-4bf4-8143-0c8f90404c45 is in state STARTED 2025-01-16 14:59:49.307589 | orchestrator | 2025-01-16 14:59:49 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 14:59:49.309391 | orchestrator | 2025-01-16 14:59:49 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 14:59:49.312729 | orchestrator | 2025-01-16 14:59:49 | INFO  | Task 443b3e12-ce2c-49f4-a9af-e72916dbf1f4 is in state STARTED 2025-01-16 14:59:52.357899 | orchestrator | 2025-01-16 14:59:49 | INFO  | Wait 1 second(s) until the next check 2025-01-16 14:59:52.357999 | orchestrator | 2025-01-16 14:59:52 | INFO  | Task fb8cdaa3-f571-457c-825d-fc303e7f9398 is in state STARTED 2025-01-16 14:59:52.362464 | orchestrator | 2025-01-16 14:59:52 | INFO  | Task c778fb1a-e574-4cb5-8cff-8543bea04ff7 is in state STARTED 2025-01-16 14:59:52.363233 | orchestrator | 2025-01-16 14:59:52 | INFO  | Task a8762f20-fe23-4bf4-8143-0c8f90404c45 is in state STARTED 2025-01-16 14:59:52.363273 | orchestrator | 2025-01-16 14:59:52 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 14:59:52.363304 | orchestrator | 2025-01-16 14:59:52 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 14:59:55.405660 | orchestrator | 2025-01-16 14:59:52 | INFO  | Task 443b3e12-ce2c-49f4-a9af-e72916dbf1f4 is in state STARTED 2025-01-16 14:59:55.405874 | orchestrator | 2025-01-16 14:59:52 | INFO  | Wait 1 second(s) until the next check 2025-01-16 14:59:55.405922 | orchestrator | 2025-01-16 14:59:55 | INFO  | Task fb8cdaa3-f571-457c-825d-fc303e7f9398 is in state STARTED 2025-01-16 14:59:55.406578 | orchestrator | 2025-01-16 14:59:55 | INFO  | Task c778fb1a-e574-4cb5-8cff-8543bea04ff7 is in state STARTED 2025-01-16 14:59:55.407048 | orchestrator | 2025-01-16 14:59:55 | INFO  | Task a8762f20-fe23-4bf4-8143-0c8f90404c45 is in state STARTED 2025-01-16 14:59:55.407303 | orchestrator | 2025-01-16 
14:59:55 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 14:59:55.407336 | orchestrator | 2025-01-16 14:59:55 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 14:59:55.408015 | orchestrator | 2025-01-16 14:59:55 | INFO  | Task 443b3e12-ce2c-49f4-a9af-e72916dbf1f4 is in state STARTED 2025-01-16 14:59:58.435314 | orchestrator | 2025-01-16 14:59:55 | INFO  | Wait 1 second(s) until the next check 2025-01-16 14:59:58.435454 | orchestrator | 2025-01-16 14:59:58 | INFO  | Task fb8cdaa3-f571-457c-825d-fc303e7f9398 is in state STARTED 2025-01-16 14:59:58.436376 | orchestrator | 2025-01-16 14:59:58 | INFO  | Task c778fb1a-e574-4cb5-8cff-8543bea04ff7 is in state STARTED 2025-01-16 14:59:58.436437 | orchestrator | 2025-01-16 14:59:58 | INFO  | Task a8762f20-fe23-4bf4-8143-0c8f90404c45 is in state STARTED 2025-01-16 14:59:58.436574 | orchestrator | 2025-01-16 14:59:58 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 14:59:58.436898 | orchestrator | 2025-01-16 14:59:58 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 14:59:58.437280 | orchestrator | 2025-01-16 14:59:58 | INFO  | Task 443b3e12-ce2c-49f4-a9af-e72916dbf1f4 is in state STARTED 2025-01-16 15:00:01.497148 | orchestrator | 2025-01-16 14:59:58 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:00:01.497304 | orchestrator | 2025-01-16 15:00:01 | INFO  | Task fb8cdaa3-f571-457c-825d-fc303e7f9398 is in state STARTED 2025-01-16 15:00:01.500455 | orchestrator | 2025-01-16 15:00:01 | INFO  | Task c778fb1a-e574-4cb5-8cff-8543bea04ff7 is in state STARTED 2025-01-16 15:00:01.500553 | orchestrator | 2025-01-16 15:00:01 | INFO  | Task a8762f20-fe23-4bf4-8143-0c8f90404c45 is in state STARTED 2025-01-16 15:00:01.501608 | orchestrator | 2025-01-16 15:00:01 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:00:01.501721 | orchestrator | 2025-01-16 15:00:01 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:00:01.502559 | orchestrator | 2025-01-16 15:00:01 | INFO  | Task 443b3e12-ce2c-49f4-a9af-e72916dbf1f4 is in state STARTED 2025-01-16 15:00:04.538263 | orchestrator | 2025-01-16 15:00:01 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:00:04.538529 | orchestrator | 2025-01-16 15:00:04 | INFO  | Task fb8cdaa3-f571-457c-825d-fc303e7f9398 is in state STARTED 2025-01-16 15:00:04.540452 | orchestrator | 2025-01-16 15:00:04 | INFO  | Task c778fb1a-e574-4cb5-8cff-8543bea04ff7 is in state STARTED 2025-01-16 15:00:04.540504 | orchestrator | 2025-01-16 15:00:04 | INFO  | Task a8762f20-fe23-4bf4-8143-0c8f90404c45 is in state STARTED 2025-01-16 15:00:04.540675 | orchestrator | 2025-01-16 15:00:04 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:00:04.541039 | orchestrator | 2025-01-16 15:00:04 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:00:04.541537 | orchestrator | 2025-01-16 15:00:04 | INFO  | Task 443b3e12-ce2c-49f4-a9af-e72916dbf1f4 is in state STARTED 2025-01-16 15:00:07.570881 | orchestrator | 2025-01-16 15:00:04 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:00:07.571091 | orchestrator | 2025-01-16 15:00:07 | INFO  | Task fb8cdaa3-f571-457c-825d-fc303e7f9398 is in state STARTED 2025-01-16 15:00:07.571170 | orchestrator | 2025-01-16 15:00:07 | INFO  | Task c778fb1a-e574-4cb5-8cff-8543bea04ff7 is in state STARTED 2025-01-16 15:00:07.571182 | 
orchestrator | 2025-01-16 15:00:07 | INFO  | Task a8762f20-fe23-4bf4-8143-0c8f90404c45 is in state STARTED 2025-01-16 15:00:07.571194 | orchestrator | 2025-01-16 15:00:07 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:00:07.571401 | orchestrator | 2025-01-16 15:00:07 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:00:07.571751 | orchestrator | 2025-01-16 15:00:07 | INFO  | Task 443b3e12-ce2c-49f4-a9af-e72916dbf1f4 is in state STARTED 2025-01-16 15:00:07.571878 | orchestrator | 2025-01-16 15:00:07 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:00:10.607713 | orchestrator | 2025-01-16 15:00:10 | INFO  | Task fb8cdaa3-f571-457c-825d-fc303e7f9398 is in state STARTED 2025-01-16 15:00:10.608495 | orchestrator | 2025-01-16 15:00:10 | INFO  | Task c778fb1a-e574-4cb5-8cff-8543bea04ff7 is in state STARTED 2025-01-16 15:00:10.609483 | orchestrator | 2025-01-16 15:00:10 | INFO  | Task a8762f20-fe23-4bf4-8143-0c8f90404c45 is in state STARTED 2025-01-16 15:00:10.609686 | orchestrator | 2025-01-16 15:00:10 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:00:10.611662 | orchestrator | 2025-01-16 15:00:10 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:00:10.612493 | orchestrator | 2025-01-16 15:00:10 | INFO  | Task 443b3e12-ce2c-49f4-a9af-e72916dbf1f4 is in state STARTED 2025-01-16 15:00:13.641381 | orchestrator | 2025-01-16 15:00:10 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:00:13.641573 | orchestrator | 2025-01-16 15:00:13 | INFO  | Task fb8cdaa3-f571-457c-825d-fc303e7f9398 is in state STARTED 2025-01-16 15:00:13.642204 | orchestrator | 2025-01-16 15:00:13 | INFO  | Task c778fb1a-e574-4cb5-8cff-8543bea04ff7 is in state STARTED 2025-01-16 15:00:13.642283 | orchestrator | 2025-01-16 15:00:13 | INFO  | Task a8762f20-fe23-4bf4-8143-0c8f90404c45 is in state STARTED 2025-01-16 15:00:13.643773 | orchestrator | 2025-01-16 15:00:13 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:00:13.644600 | orchestrator | 2025-01-16 15:00:13 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:00:13.644731 | orchestrator | 2025-01-16 15:00:13 | INFO  | Task 443b3e12-ce2c-49f4-a9af-e72916dbf1f4 is in state STARTED 2025-01-16 15:00:13.644892 | orchestrator | 2025-01-16 15:00:13 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:00:16.666555 | orchestrator | 2025-01-16 15:00:16 | INFO  | Task fb8cdaa3-f571-457c-825d-fc303e7f9398 is in state STARTED 2025-01-16 15:00:16.667410 | orchestrator | 2025-01-16 15:00:16 | INFO  | Task c778fb1a-e574-4cb5-8cff-8543bea04ff7 is in state SUCCESS 2025-01-16 15:00:16.667590 | orchestrator | 2025-01-16 15:00:16 | INFO  | Task a8762f20-fe23-4bf4-8143-0c8f90404c45 is in state STARTED 2025-01-16 15:00:16.672944 | orchestrator | 2025-01-16 15:00:16 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:00:16.675227 | orchestrator | 2025-01-16 15:00:16 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:00:16.675585 | orchestrator | 2025-01-16 15:00:16 | INFO  | Task 443b3e12-ce2c-49f4-a9af-e72916dbf1f4 is in state STARTED 2025-01-16 15:00:19.703745 | orchestrator | 2025-01-16 15:00:16 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:00:19.703929 | orchestrator | 2025-01-16 15:00:19 | INFO  | Task fb8cdaa3-f571-457c-825d-fc303e7f9398 is in state STARTED 
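
The play output that begins below applies the osism.services.homer role: it creates /opt/homer and /opt/homer/configuration, copies a config.yml and a docker-compose.yml, attaches the dashboard to an external traefik network, and then starts the service. As a hedged illustration only, a compose file for such a setup might look roughly like the sketch below; the image tag, mount target, and network name are assumptions, since the real file is templated by the role and never printed in this log.

services:
  homer:
    image: b4bz/homer:latest                      # assumed image and tag, not taken from this log
    restart: unless-stopped
    volumes:
      - /opt/homer/configuration:/www/assets:ro   # configuration directory created by the role
    networks:
      - traefik

networks:
  traefik:
    external: true                                # corresponds to the "Create traefik external network" task below
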
2025-01-16 15:00:19.704189 | orchestrator | 2025-01-16 15:00:19 | INFO  | Task a8762f20-fe23-4bf4-8143-0c8f90404c45 is in state STARTED 2025-01-16 15:00:19.704235 | orchestrator | 2025-01-16 15:00:19 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:00:19.704408 | orchestrator | 2025-01-16 15:00:19 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:00:19.707656 | orchestrator | 2025-01-16 15:00:19 | INFO  | Task 443b3e12-ce2c-49f4-a9af-e72916dbf1f4 is in state STARTED 2025-01-16 15:00:22.765093 | orchestrator | 2025-01-16 15:00:19 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:00:22.765242 | orchestrator | 2025-01-16 15:00:22 | INFO  | Task fb8cdaa3-f571-457c-825d-fc303e7f9398 is in state STARTED 2025-01-16 15:00:22.765993 | orchestrator | 2025-01-16 15:00:22 | INFO  | Task a8762f20-fe23-4bf4-8143-0c8f90404c45 is in state STARTED 2025-01-16 15:00:22.766082 | orchestrator | 2025-01-16 15:00:22 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:00:22.766699 | orchestrator | 2025-01-16 15:00:22 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:00:22.768182 | orchestrator | 2025-01-16 15:00:22 | INFO  | Task 443b3e12-ce2c-49f4-a9af-e72916dbf1f4 is in state STARTED 2025-01-16 15:00:25.788716 | orchestrator | 2025-01-16 15:00:22 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:00:25.788851 | orchestrator | 2025-01-16 15:00:25 | INFO  | Task fb8cdaa3-f571-457c-825d-fc303e7f9398 is in state STARTED 2025-01-16 15:00:25.793291 | orchestrator | 2025-01-16 15:00:25 | INFO  | Task a8762f20-fe23-4bf4-8143-0c8f90404c45 is in state STARTED 2025-01-16 15:00:25.793411 | orchestrator | 2025-01-16 15:00:25 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:00:25.793762 | orchestrator | 2025-01-16 15:00:25 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:00:25.794161 | orchestrator | 2025-01-16 15:00:25 | INFO  | Task 443b3e12-ce2c-49f4-a9af-e72916dbf1f4 is in state STARTED 2025-01-16 15:00:25.794310 | orchestrator | 2025-01-16 15:00:25 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:00:28.843471 | orchestrator | 2025-01-16 15:00:28 | INFO  | Task fb8cdaa3-f571-457c-825d-fc303e7f9398 is in state SUCCESS 2025-01-16 15:00:28.845919 | orchestrator | 2025-01-16 15:00:28.845978 | orchestrator | 2025-01-16 15:00:28.845985 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-01-16 15:00:28.845992 | orchestrator | 2025-01-16 15:00:28.845998 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-01-16 15:00:28.846005 | orchestrator | Thursday 16 January 2025 14:58:51 +0000 (0:00:07.017) 0:00:07.017 ****** 2025-01-16 15:00:28.846056 | orchestrator | ok: [testbed-manager] => { 2025-01-16 15:00:28.846066 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2025-01-16 15:00:28.846074 | orchestrator | } 2025-01-16 15:00:28.846080 | orchestrator | 2025-01-16 15:00:28.846086 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-01-16 15:00:28.846092 | orchestrator | Thursday 16 January 2025 14:58:57 +0000 (0:00:06.020) 0:00:13.037 ****** 2025-01-16 15:00:28.846098 | orchestrator | ok: [testbed-manager] 2025-01-16 15:00:28.846105 | orchestrator | 2025-01-16 15:00:28.846110 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-01-16 15:00:28.846116 | orchestrator | Thursday 16 January 2025 14:59:01 +0000 (0:00:04.274) 0:00:17.311 ****** 2025-01-16 15:00:28.846122 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-01-16 15:00:28.846128 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-01-16 15:00:28.846134 | orchestrator | 2025-01-16 15:00:28.846139 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-01-16 15:00:28.846145 | orchestrator | Thursday 16 January 2025 14:59:05 +0000 (0:00:04.034) 0:00:21.345 ****** 2025-01-16 15:00:28.846151 | orchestrator | changed: [testbed-manager] 2025-01-16 15:00:28.846157 | orchestrator | 2025-01-16 15:00:28.846163 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-01-16 15:00:28.846169 | orchestrator | Thursday 16 January 2025 14:59:10 +0000 (0:00:04.396) 0:00:25.742 ****** 2025-01-16 15:00:28.846175 | orchestrator | changed: [testbed-manager] 2025-01-16 15:00:28.846181 | orchestrator | 2025-01-16 15:00:28.846186 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-01-16 15:00:28.846192 | orchestrator | Thursday 16 January 2025 14:59:13 +0000 (0:00:03.129) 0:00:28.871 ****** 2025-01-16 15:00:28.846198 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
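
The "FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left)." entry above is Ansible's built-in retry loop; the task succeeds on a later attempt, as the ok: [testbed-manager] line that follows shows. A hypothetical sketch of that retry pattern, assuming the service is driven through community.docker.docker_compose_v2 (the module choice and its parameters are assumptions, not confirmed by this log):

- name: Manage homer service
  community.docker.docker_compose_v2:
    project_src: /opt/homer                # directory holding the docker-compose.yml copied above
    state: present
  register: homer_service
  retries: 10                              # matches the "10 retries left" message in the log
  delay: 5                                 # assumed delay between attempts
  until: homer_service is success
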
2025-01-16 15:00:28.846205 | orchestrator | ok: [testbed-manager] 2025-01-16 15:00:28.846214 | orchestrator | 2025-01-16 15:00:28.846229 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-01-16 15:00:28.846238 | orchestrator | Thursday 16 January 2025 14:59:40 +0000 (0:00:27.137) 0:00:56.009 ****** 2025-01-16 15:00:28.846246 | orchestrator | changed: [testbed-manager] 2025-01-16 15:00:28.846255 | orchestrator | 2025-01-16 15:00:28.846264 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 15:00:28.846270 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 15:00:28.846277 | orchestrator | 2025-01-16 15:00:28.846283 | orchestrator | 2025-01-16 15:00:28.846288 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 15:00:28.846294 | orchestrator | Thursday 16 January 2025 14:59:45 +0000 (0:00:05.254) 0:01:01.264 ****** 2025-01-16 15:00:28.846300 | orchestrator | =============================================================================== 2025-01-16 15:00:28.846305 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 27.14s 2025-01-16 15:00:28.846311 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 6.02s 2025-01-16 15:00:28.846316 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 5.25s 2025-01-16 15:00:28.846322 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 4.40s 2025-01-16 15:00:28.846328 | orchestrator | osism.services.homer : Create traefik external network ------------------ 4.27s 2025-01-16 15:00:28.846346 | orchestrator | osism.services.homer : Create required directories ---------------------- 4.03s 2025-01-16 15:00:28.846351 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 3.13s 2025-01-16 15:00:28.846357 | orchestrator | 2025-01-16 15:00:28.846363 | orchestrator | 2025-01-16 15:00:28.846368 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-01-16 15:00:28.846374 | orchestrator | 2025-01-16 15:00:28.846381 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-01-16 15:00:28.846387 | orchestrator | Thursday 16 January 2025 14:58:51 +0000 (0:00:07.881) 0:00:07.881 ****** 2025-01-16 15:00:28.846393 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-01-16 15:00:28.846401 | orchestrator | 2025-01-16 15:00:28.846407 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-01-16 15:00:28.846413 | orchestrator | Thursday 16 January 2025 14:58:58 +0000 (0:00:06.100) 0:00:13.981 ****** 2025-01-16 15:00:28.846419 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-01-16 15:00:28.846426 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-01-16 15:00:28.846432 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-01-16 15:00:28.846439 | orchestrator | 2025-01-16 15:00:28.846445 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-01-16 
15:00:28.846452 | orchestrator | Thursday 16 January 2025 14:59:03 +0000 (0:00:05.023) 0:00:19.005 ****** 2025-01-16 15:00:28.846458 | orchestrator | changed: [testbed-manager] 2025-01-16 15:00:28.846464 | orchestrator | 2025-01-16 15:00:28.846471 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-01-16 15:00:28.846477 | orchestrator | Thursday 16 January 2025 14:59:09 +0000 (0:00:06.124) 0:00:25.130 ****** 2025-01-16 15:00:28.846491 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-01-16 15:00:28.846498 | orchestrator | ok: [testbed-manager] 2025-01-16 15:00:28.846504 | orchestrator | 2025-01-16 15:00:28.846510 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-01-16 15:00:28.846517 | orchestrator | Thursday 16 January 2025 14:59:46 +0000 (0:00:37.233) 0:01:02.364 ****** 2025-01-16 15:00:28.846523 | orchestrator | changed: [testbed-manager] 2025-01-16 15:00:28.846529 | orchestrator | 2025-01-16 15:00:28.846535 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-01-16 15:00:28.846541 | orchestrator | Thursday 16 January 2025 14:59:48 +0000 (0:00:01.703) 0:01:04.068 ****** 2025-01-16 15:00:28.846547 | orchestrator | ok: [testbed-manager] 2025-01-16 15:00:28.846553 | orchestrator | 2025-01-16 15:00:28.846560 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-01-16 15:00:28.846566 | orchestrator | Thursday 16 January 2025 14:59:50 +0000 (0:00:02.356) 0:01:06.424 ****** 2025-01-16 15:00:28.846572 | orchestrator | changed: [testbed-manager] 2025-01-16 15:00:28.846578 | orchestrator | 2025-01-16 15:00:28.846585 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-01-16 15:00:28.846591 | orchestrator | Thursday 16 January 2025 14:59:56 +0000 (0:00:05.922) 0:01:12.346 ****** 2025-01-16 15:00:28.846597 | orchestrator | changed: [testbed-manager] 2025-01-16 15:00:28.846603 | orchestrator | 2025-01-16 15:00:28.846609 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-01-16 15:00:28.846619 | orchestrator | Thursday 16 January 2025 15:00:05 +0000 (0:00:09.314) 0:01:21.661 ****** 2025-01-16 15:00:28.846626 | orchestrator | changed: [testbed-manager] 2025-01-16 15:00:28.846632 | orchestrator | 2025-01-16 15:00:28.846638 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-01-16 15:00:28.846644 | orchestrator | Thursday 16 January 2025 15:00:09 +0000 (0:00:03.317) 0:01:24.978 ****** 2025-01-16 15:00:28.846650 | orchestrator | ok: [testbed-manager] 2025-01-16 15:00:28.846660 | orchestrator | 2025-01-16 15:00:28.846666 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 15:00:28.846673 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 15:00:28.846679 | orchestrator | 2025-01-16 15:00:28.846685 | orchestrator | 2025-01-16 15:00:28.846692 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 15:00:28.846698 | orchestrator | Thursday 16 January 2025 15:00:13 +0000 (0:00:04.103) 0:01:29.082 ****** 2025-01-16 15:00:28.846704 | orchestrator | 
=============================================================================== 2025-01-16 15:00:28.846710 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 37.23s 2025-01-16 15:00:28.846717 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 9.31s 2025-01-16 15:00:28.846723 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 6.12s 2025-01-16 15:00:28.846729 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 6.10s 2025-01-16 15:00:28.846735 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 5.92s 2025-01-16 15:00:28.846741 | orchestrator | osism.services.openstackclient : Create required directories ------------ 5.02s 2025-01-16 15:00:28.846748 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 4.10s 2025-01-16 15:00:28.846757 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 3.32s 2025-01-16 15:00:28.846766 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 2.36s 2025-01-16 15:00:28.846790 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.70s 2025-01-16 15:00:28.846799 | orchestrator | 2025-01-16 15:00:28.846808 | orchestrator | 2025-01-16 15:00:28.846816 | orchestrator | PLAY [Apply role common] ******************************************************* 2025-01-16 15:00:28.846825 | orchestrator | 2025-01-16 15:00:28.846833 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-01-16 15:00:28.846842 | orchestrator | Thursday 16 January 2025 14:58:41 +0000 (0:00:00.228) 0:00:00.228 ****** 2025-01-16 15:00:28.846852 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 15:00:28.846861 | orchestrator | 2025-01-16 15:00:28.846870 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-01-16 15:00:28.846876 | orchestrator | Thursday 16 January 2025 14:58:42 +0000 (0:00:01.146) 0:00:01.374 ****** 2025-01-16 15:00:28.846882 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-01-16 15:00:28.846888 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-01-16 15:00:28.846893 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-01-16 15:00:28.846898 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-01-16 15:00:28.846904 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-01-16 15:00:28.846910 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-01-16 15:00:28.846915 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-01-16 15:00:28.846921 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-01-16 15:00:28.846926 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-01-16 15:00:28.846932 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-01-16 15:00:28.846941 | orchestrator | changed: 
[testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-01-16 15:00:28.846951 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-01-16 15:00:28.846957 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-01-16 15:00:28.846967 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-01-16 15:00:28.846973 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-01-16 15:00:28.846979 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-01-16 15:00:28.846984 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-01-16 15:00:28.846990 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-01-16 15:00:28.846995 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-01-16 15:00:28.847001 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-01-16 15:00:28.847007 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-01-16 15:00:28.847012 | orchestrator | 2025-01-16 15:00:28.847018 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-01-16 15:00:28.847024 | orchestrator | Thursday 16 January 2025 14:58:45 +0000 (0:00:03.177) 0:00:04.552 ****** 2025-01-16 15:00:28.847029 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 15:00:28.847037 | orchestrator | 2025-01-16 15:00:28.847042 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-01-16 15:00:28.847048 | orchestrator | Thursday 16 January 2025 14:58:46 +0000 (0:00:01.332) 0:00:05.885 ****** 2025-01-16 15:00:28.847056 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-01-16 15:00:28.847064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-01-16 15:00:28.847070 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-01-16 15:00:28.847076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-01-16 15:00:28.847082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-01-16 15:00:28.847094 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-01-16 15:00:28.847101 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-01-16 15:00:28.847107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.847113 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.847119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.847125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.847136 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.847142 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.847148 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.847155 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.847164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.847170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.847176 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.847181 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.847193 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.847202 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.847208 | orchestrator | 2025-01-16 15:00:28.847214 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-01-16 15:00:28.847220 | orchestrator | Thursday 16 January 2025 14:58:49 +0000 (0:00:03.186) 0:00:09.072 ****** 2025-01-16 15:00:28.847225 | orchestrator | 
skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-01-16 15:00:28.847231 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:00:28.847239 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:00:28.847245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-01-16 15:00:28.847251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:00:28.847260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:00:28.847266 | orchestrator | skipping: [testbed-manager] 2025-01-16 15:00:28.847275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 
'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-01-16 15:00:28.847281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:00:28.847287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:00:28.847293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-01-16 15:00:28.847299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:00:28.847305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:00:28.847310 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:00:28.847322 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:00:28.847328 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-01-16 15:00:28.847334 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:00:28.847344 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:00:28.847487 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:00:28.847495 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:00:28.847502 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-01-16 15:00:28.847508 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:00:28.847514 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:00:28.847520 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:00:28.847526 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 
'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-01-16 15:00:28.847536 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:00:28.847542 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:00:28.847548 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:00:28.847554 | orchestrator | 2025-01-16 15:00:28.847560 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-01-16 15:00:28.847565 | orchestrator | Thursday 16 January 2025 14:58:51 +0000 (0:00:01.257) 0:00:10.329 ****** 2025-01-16 15:00:28.847574 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-01-16 15:00:28.847580 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:00:28.847591 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-01-16 15:00:28.847598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-01-16 15:00:28.847604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:00:28.847612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:00:28.847618 | orchestrator | skipping: [testbed-manager] 2025-01-16 15:00:28.847624 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:00:28.847630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-01-16 15:00:28.847636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:00:28.847647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:00:28.847654 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-01-16 15:00:28.847660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:00:28.847666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:00:28.847678 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-01-16 15:00:28.847684 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:00:28.847690 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:00:28.847696 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:00:28.847702 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:00:28.847707 | orchestrator | skipping: [testbed-node-3] 
2025-01-16 15:00:28.847715 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-01-16 15:00:28.847721 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:00:28.847727 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:00:28.847733 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:00:28.847739 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-01-16 15:00:28.847750 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:00:28.847759 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:00:28.847790 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:00:28.847799 | orchestrator | 
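(For reference, the fluentd, kolla-toolbox and cron loop items printed by the tasks above all come from a single service map that the kolla-ansible common role iterates over on every host. Below is a minimal, runnable Python sketch of that structure, reconstructed from the values visible in the log output; the variable name common_services, the trimmed volume lists and the final filtering loop are illustrative assumptions, not taken from the role itself.)

# Reconstructed from the loop items in the log above; names and the loop are illustrative only.
common_services = {
    "fluentd": {
        "container_name": "fluentd",
        "group": "fluentd",
        "enabled": True,
        "image": "nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1",
        "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
        # Volume list shortened here; the log shows the full set of bind mounts.
        "volumes": ["/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro", "kolla_logs:/var/log/kolla/"],
        "dimensions": {},
    },
    "kolla-toolbox": {
        "container_name": "kolla_toolbox",
        "group": "kolla-toolbox",
        "enabled": True,
        "image": "nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1",
        "privileged": True,
        "dimensions": {},
    },
    "cron": {
        "container_name": "cron",
        "group": "cron",
        "enabled": True,
        "image": "nexus.testbed.osism.xyz:8193/kolla/cron:2024.1",
        "environment": {"KOLLA_LOGROTATE_SCHEDULE": "daily"},
        "dimensions": {},
    },
}

# Tasks such as "Copying over extra CA certificates" or "Copying over config.json
# files for services" act once per enabled service, which is why each host shows
# one changed/skipping line per (service, host) pair in the log.
for name, svc in common_services.items():
    if svc["enabled"]:
        print(f"{name}: {svc['image']}")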
2025-01-16 15:00:28.847808 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-01-16 15:00:28.847817 | orchestrator | Thursday 16 January 2025 14:58:53 +0000 (0:00:01.966) 0:00:12.296 ****** 2025-01-16 15:00:28.847826 | orchestrator | skipping: [testbed-manager] 2025-01-16 15:00:28.847834 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:00:28.847846 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:00:28.847852 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:00:28.847857 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:00:28.847863 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:00:28.847868 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:00:28.847874 | orchestrator | 2025-01-16 15:00:28.847880 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-01-16 15:00:28.847885 | orchestrator | Thursday 16 January 2025 14:58:53 +0000 (0:00:00.776) 0:00:13.072 ****** 2025-01-16 15:00:28.847891 | orchestrator | skipping: [testbed-manager] 2025-01-16 15:00:28.847896 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:00:28.847902 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:00:28.847907 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:00:28.847913 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:00:28.847918 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:00:28.847924 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:00:28.847930 | orchestrator | 2025-01-16 15:00:28.847935 | orchestrator | TASK [common : Ensure fluentd image is present for label check] **************** 2025-01-16 15:00:28.847941 | orchestrator | Thursday 16 January 2025 14:58:54 +0000 (0:00:00.650) 0:00:13.723 ****** 2025-01-16 15:00:28.847947 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:00:28.847952 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:00:28.847958 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:00:28.847963 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:00:28.847969 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:00:28.847975 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:00:28.847980 | orchestrator | changed: [testbed-manager] 2025-01-16 15:00:28.847986 | orchestrator | 2025-01-16 15:00:28.847995 | orchestrator | TASK [common : Fetch fluentd Docker image labels] ****************************** 2025-01-16 15:00:28.848004 | orchestrator | Thursday 16 January 2025 14:59:10 +0000 (0:00:16.126) 0:00:29.849 ****** 2025-01-16 15:00:28.848012 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:00:28.848023 | orchestrator | ok: [testbed-manager] 2025-01-16 15:00:28.848062 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:00:28.848071 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:00:28.848079 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:00:28.848095 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:00:28.848104 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:00:28.848113 | orchestrator | 2025-01-16 15:00:28.848122 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-01-16 15:00:28.848132 | orchestrator | Thursday 16 January 2025 14:59:12 +0000 (0:00:02.206) 0:00:32.056 ****** 2025-01-16 15:00:28.848140 | orchestrator | ok: [testbed-manager] 2025-01-16 15:00:28.848149 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:00:28.848158 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:00:28.848167 | 
orchestrator | ok: [testbed-node-2] 2025-01-16 15:00:28.848176 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:00:28.848184 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:00:28.848193 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:00:28.848202 | orchestrator | 2025-01-16 15:00:28.848210 | orchestrator | TASK [common : Fetch fluentd Podman image labels] ****************************** 2025-01-16 15:00:28.848219 | orchestrator | Thursday 16 January 2025 14:59:13 +0000 (0:00:00.952) 0:00:33.009 ****** 2025-01-16 15:00:28.848229 | orchestrator | skipping: [testbed-manager] 2025-01-16 15:00:28.848237 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:00:28.848246 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:00:28.848256 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:00:28.848264 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:00:28.848273 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:00:28.848282 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:00:28.848291 | orchestrator | 2025-01-16 15:00:28.848302 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-01-16 15:00:28.848310 | orchestrator | Thursday 16 January 2025 14:59:14 +0000 (0:00:00.888) 0:00:33.897 ****** 2025-01-16 15:00:28.848319 | orchestrator | skipping: [testbed-manager] 2025-01-16 15:00:28.848328 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:00:28.848336 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:00:28.848344 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:00:28.848354 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:00:28.848362 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:00:28.848371 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:00:28.848396 | orchestrator | 2025-01-16 15:00:28.848402 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-01-16 15:00:28.848408 | orchestrator | Thursday 16 January 2025 14:59:15 +0000 (0:00:01.083) 0:00:34.981 ****** 2025-01-16 15:00:28.848415 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-01-16 15:00:28.848422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-01-16 15:00:28.848428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-01-16 15:00:28.848442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-01-16 15:00:28.848462 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.848468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.848474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.848480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.848486 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 
'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.848493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.848502 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-01-16 15:00:28.848508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.848520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.848526 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-01-16 15:00:28.848532 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-01-16 15:00:28.848538 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.848544 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.848550 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.848564 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.848577 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.848587 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.848596 | orchestrator | 2025-01-16 15:00:28.848606 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-01-16 15:00:28.848615 | orchestrator | Thursday 16 January 2025 14:59:21 +0000 (0:00:05.264) 0:00:40.245 ****** 2025-01-16 15:00:28.848625 | orchestrator | 
[WARNING]: Skipped 2025-01-16 15:00:28.848633 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-01-16 15:00:28.848641 | orchestrator | to this access issue: 2025-01-16 15:00:28.848654 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-01-16 15:00:28.848663 | orchestrator | directory 2025-01-16 15:00:28.848672 | orchestrator | ok: [testbed-manager -> localhost] 2025-01-16 15:00:28.848680 | orchestrator | 2025-01-16 15:00:28.848689 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-01-16 15:00:28.848696 | orchestrator | Thursday 16 January 2025 14:59:22 +0000 (0:00:00.873) 0:00:41.119 ****** 2025-01-16 15:00:28.848704 | orchestrator | [WARNING]: Skipped 2025-01-16 15:00:28.848712 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-01-16 15:00:28.848720 | orchestrator | to this access issue: 2025-01-16 15:00:28.848728 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-01-16 15:00:28.848737 | orchestrator | directory 2025-01-16 15:00:28.848746 | orchestrator | ok: [testbed-manager -> localhost] 2025-01-16 15:00:28.848756 | orchestrator | 2025-01-16 15:00:28.848766 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-01-16 15:00:28.848818 | orchestrator | Thursday 16 January 2025 14:59:22 +0000 (0:00:00.567) 0:00:41.686 ****** 2025-01-16 15:00:28.848829 | orchestrator | [WARNING]: Skipped 2025-01-16 15:00:28.848835 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-01-16 15:00:28.848840 | orchestrator | to this access issue: 2025-01-16 15:00:28.848847 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-01-16 15:00:28.848853 | orchestrator | directory 2025-01-16 15:00:28.848859 | orchestrator | ok: [testbed-manager -> localhost] 2025-01-16 15:00:28.848865 | orchestrator | 2025-01-16 15:00:28.848876 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-01-16 15:00:28.848882 | orchestrator | Thursday 16 January 2025 14:59:23 +0000 (0:00:00.622) 0:00:42.309 ****** 2025-01-16 15:00:28.848887 | orchestrator | [WARNING]: Skipped 2025-01-16 15:00:28.848893 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-01-16 15:00:28.848900 | orchestrator | to this access issue: 2025-01-16 15:00:28.848910 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-01-16 15:00:28.848919 | orchestrator | directory 2025-01-16 15:00:28.848928 | orchestrator | ok: [testbed-manager -> localhost] 2025-01-16 15:00:28.848936 | orchestrator | 2025-01-16 15:00:28.848945 | orchestrator | TASK [common : Copying over td-agent.conf] ************************************* 2025-01-16 15:00:28.848954 | orchestrator | Thursday 16 January 2025 14:59:24 +0000 (0:00:01.036) 0:00:43.345 ****** 2025-01-16 15:00:28.848963 | orchestrator | changed: [testbed-manager] 2025-01-16 15:00:28.848972 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:00:28.848980 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:00:28.848989 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:00:28.848997 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:00:28.849006 | orchestrator | changed: [testbed-node-4] 
2025-01-16 15:00:28.849015 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:00:28.849022 | orchestrator | 2025-01-16 15:00:28.849030 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-01-16 15:00:28.849040 | orchestrator | Thursday 16 January 2025 14:59:28 +0000 (0:00:04.412) 0:00:47.758 ****** 2025-01-16 15:00:28.849050 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-01-16 15:00:28.849059 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-01-16 15:00:28.849068 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-01-16 15:00:28.849077 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-01-16 15:00:28.849085 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-01-16 15:00:28.849093 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-01-16 15:00:28.849102 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-01-16 15:00:28.849111 | orchestrator | 2025-01-16 15:00:28.849119 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-01-16 15:00:28.849128 | orchestrator | Thursday 16 January 2025 14:59:31 +0000 (0:00:03.260) 0:00:51.018 ****** 2025-01-16 15:00:28.849137 | orchestrator | changed: [testbed-manager] 2025-01-16 15:00:28.849145 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:00:28.849155 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:00:28.849163 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:00:28.849173 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:00:28.849189 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:00:28.850750 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:00:28.850805 | orchestrator | 2025-01-16 15:00:28.850816 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-01-16 15:00:28.850826 | orchestrator | Thursday 16 January 2025 14:59:34 +0000 (0:00:02.683) 0:00:53.702 ****** 2025-01-16 15:00:28.850837 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-01-16 15:00:28.850860 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:00:28.850871 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-01-16 15:00:28.850877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:00:28.850895 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.850905 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.850911 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-01-16 15:00:28.850917 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-01-16 15:00:28.850923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:00:28.850932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:00:28.850937 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-01-16 15:00:28.850943 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.850952 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:00:28.850958 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.850964 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-01-16 15:00:28.850970 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:00:28.850980 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.850985 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.850991 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-01-16 15:00:28.851000 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:00:28.851009 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.851018 | orchestrator | 2025-01-16 
15:00:28.851027 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-01-16 15:00:28.851036 | orchestrator | Thursday 16 January 2025 14:59:36 +0000 (0:00:02.229) 0:00:55.931 ****** 2025-01-16 15:00:28.851044 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-01-16 15:00:28.851053 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-01-16 15:00:28.851064 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-01-16 15:00:28.851072 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-01-16 15:00:28.851080 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-01-16 15:00:28.851088 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-01-16 15:00:28.851096 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-01-16 15:00:28.851104 | orchestrator | 2025-01-16 15:00:28.851117 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-01-16 15:00:28.851126 | orchestrator | Thursday 16 January 2025 14:59:38 +0000 (0:00:02.084) 0:00:58.016 ****** 2025-01-16 15:00:28.851133 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-01-16 15:00:28.851142 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-01-16 15:00:28.851151 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-01-16 15:00:28.851158 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-01-16 15:00:28.851166 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-01-16 15:00:28.851173 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-01-16 15:00:28.851182 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-01-16 15:00:28.851190 | orchestrator | 2025-01-16 15:00:28.851203 | orchestrator | TASK [common : Check common containers] **************************************** 2025-01-16 15:00:28.851212 | orchestrator | Thursday 16 January 2025 14:59:41 +0000 (0:00:03.083) 0:01:01.100 ****** 2025-01-16 15:00:28.851220 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-01-16 15:00:28.851230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-01-16 15:00:28.851238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-01-16 15:00:28.851249 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.851254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.851264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-01-16 15:00:28.851270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.851276 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-01-16 15:00:28.851281 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.851287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.851293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.851304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.851310 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.851319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.851325 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-01-16 15:00:28.851331 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.851337 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-01-16 15:00:28.851343 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.851353 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.851359 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.851372 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:00:28.851378 | orchestrator | 2025-01-16 15:00:28.851384 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-01-16 15:00:28.851390 | orchestrator | Thursday 16 January 2025 14:59:45 +0000 (0:00:03.880) 0:01:04.980 ****** 2025-01-16 15:00:28.851396 | orchestrator | changed: [testbed-manager] 2025-01-16 15:00:28.851402 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:00:28.851408 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:00:28.851414 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:00:28.851420 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:00:28.851426 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:00:28.851432 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:00:28.851438 | orchestrator | 2025-01-16 15:00:28.851444 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-01-16 15:00:28.851450 | orchestrator | Thursday 16 January 2025 14:59:47 +0000 (0:00:01.837) 0:01:06.817 ****** 2025-01-16 15:00:28.851456 | orchestrator | changed: [testbed-manager] 2025-01-16 15:00:28.851461 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:00:28.851467 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:00:28.851473 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:00:28.851479 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:00:28.851485 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:00:28.851491 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:00:28.851497 | orchestrator | 2025-01-16 15:00:28.851503 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-01-16 15:00:28.851509 | orchestrator | Thursday 16 January 2025 14:59:49 +0000 (0:00:01.849) 0:01:08.666 ****** 2025-01-16 15:00:28.851515 | orchestrator | 2025-01-16 15:00:28.851521 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-01-16 15:00:28.851527 | orchestrator | Thursday 16 January 2025 14:59:49 +0000 (0:00:00.053) 0:01:08.719 ****** 2025-01-16 15:00:28.851533 | orchestrator | 2025-01-16 15:00:28.851539 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-01-16 15:00:28.851545 | orchestrator | Thursday 16 January 2025 14:59:49 +0000 (0:00:00.049) 0:01:08.769 ****** 2025-01-16 15:00:28.851551 | orchestrator | 2025-01-16 15:00:28.851557 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-01-16 15:00:28.851563 | orchestrator | Thursday 16 January 2025 14:59:49 +0000 (0:00:00.048) 0:01:08.818 ****** 2025-01-16 15:00:28.851569 | orchestrator | 2025-01-16 15:00:28.851575 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-01-16 15:00:28.851580 | orchestrator | Thursday 16 January 2025 14:59:49 +0000 (0:00:00.183) 0:01:09.002 ****** 2025-01-16 15:00:28.851586 | orchestrator | 2025-01-16 15:00:28.851592 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-01-16 15:00:28.851598 | orchestrator | Thursday 16 January 2025 14:59:49 +0000 (0:00:00.052) 0:01:09.054 ****** 2025-01-16 15:00:28.851604 | orchestrator | 2025-01-16 15:00:28.851610 | orchestrator | TASK [common : Flush 
handlers] ************************************************* 2025-01-16 15:00:28.851616 | orchestrator | Thursday 16 January 2025 14:59:49 +0000 (0:00:00.048) 0:01:09.102 ****** 2025-01-16 15:00:28.851622 | orchestrator | 2025-01-16 15:00:28.851628 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-01-16 15:00:28.851634 | orchestrator | Thursday 16 January 2025 14:59:50 +0000 (0:00:00.206) 0:01:09.309 ****** 2025-01-16 15:00:28.851643 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:00:28.851649 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:00:28.851655 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:00:28.851661 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:00:28.851667 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:00:28.851673 | orchestrator | changed: [testbed-manager] 2025-01-16 15:00:28.851679 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:00:28.851685 | orchestrator | 2025-01-16 15:00:28.851691 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-01-16 15:00:28.851697 | orchestrator | Thursday 16 January 2025 14:59:58 +0000 (0:00:08.417) 0:01:17.727 ****** 2025-01-16 15:00:28.851703 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:00:28.851709 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:00:28.851715 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:00:28.851721 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:00:28.851727 | orchestrator | changed: [testbed-manager] 2025-01-16 15:00:28.851733 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:00:28.851738 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:00:28.851744 | orchestrator | 2025-01-16 15:00:28.851749 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-01-16 15:00:28.851754 | orchestrator | Thursday 16 January 2025 15:00:15 +0000 (0:00:16.927) 0:01:34.654 ****** 2025-01-16 15:00:28.851760 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:00:28.851767 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:00:28.859331 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:00:28.859408 | orchestrator | ok: [testbed-manager] 2025-01-16 15:00:28.859422 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:00:28.859445 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:00:28.859455 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:00:28.859465 | orchestrator | 2025-01-16 15:00:28.859476 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-01-16 15:00:28.859487 | orchestrator | Thursday 16 January 2025 15:00:17 +0000 (0:00:02.299) 0:01:36.954 ****** 2025-01-16 15:00:28.859497 | orchestrator | changed: [testbed-manager] 2025-01-16 15:00:28.859507 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:00:28.859516 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:00:28.859527 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:00:28.859535 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:00:28.859541 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:00:28.859547 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:00:28.859568 | orchestrator | 2025-01-16 15:00:28.859575 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 15:00:28.859582 | orchestrator | testbed-manager : ok=25  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 
2025-01-16 15:00:28.859590 | orchestrator | testbed-node-0 : ok=21  changed=14  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-01-16 15:00:28.859597 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-01-16 15:00:28.859603 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-01-16 15:00:28.859609 | orchestrator | testbed-node-3 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-01-16 15:00:28.859615 | orchestrator | testbed-node-4 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-01-16 15:00:28.859621 | orchestrator | testbed-node-5 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-01-16 15:00:28.859627 | orchestrator | 2025-01-16 15:00:28.859648 | orchestrator | 2025-01-16 15:00:28.859654 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 15:00:28.859660 | orchestrator | Thursday 16 January 2025 15:00:27 +0000 (0:00:09.403) 0:01:46.358 ****** 2025-01-16 15:00:28.859667 | orchestrator | =============================================================================== 2025-01-16 15:00:28.859673 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 16.93s 2025-01-16 15:00:28.859679 | orchestrator | common : Ensure fluentd image is present for label check --------------- 16.13s 2025-01-16 15:00:28.859685 | orchestrator | common : Restart cron container ----------------------------------------- 9.40s 2025-01-16 15:00:28.859691 | orchestrator | common : Restart fluentd container -------------------------------------- 8.42s 2025-01-16 15:00:28.859697 | orchestrator | common : Copying over config.json files for services -------------------- 5.26s 2025-01-16 15:00:28.859703 | orchestrator | common : Copying over td-agent.conf ------------------------------------- 4.41s 2025-01-16 15:00:28.859709 | orchestrator | common : Check common containers ---------------------------------------- 3.88s 2025-01-16 15:00:28.859715 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.26s 2025-01-16 15:00:28.859721 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 3.19s 2025-01-16 15:00:28.859727 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.18s 2025-01-16 15:00:28.859733 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.08s 2025-01-16 15:00:28.859739 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.68s 2025-01-16 15:00:28.859745 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.30s 2025-01-16 15:00:28.859751 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.23s 2025-01-16 15:00:28.859757 | orchestrator | common : Fetch fluentd Docker image labels ------------------------------ 2.21s 2025-01-16 15:00:28.859763 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.08s 2025-01-16 15:00:28.859803 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 1.97s 2025-01-16 15:00:28.859811 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.85s 2025-01-16 15:00:28.859817 | orchestrator | common : Creating 
log volume -------------------------------------------- 1.84s 2025-01-16 15:00:28.859823 | orchestrator | common : include_tasks -------------------------------------------------- 1.33s 2025-01-16 15:00:28.859840 | orchestrator | 2025-01-16 15:00:28 | INFO  | Task a8762f20-fe23-4bf4-8143-0c8f90404c45 is in state SUCCESS 2025-01-16 15:00:28.859887 | orchestrator | 2025-01-16 15:00:28 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:00:28.859895 | orchestrator | 2025-01-16 15:00:28 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:00:28.859901 | orchestrator | 2025-01-16 15:00:28 | INFO  | Task 5986cec7-7083-42cb-9ee8-d91b3736b4d2 is in state STARTED 2025-01-16 15:00:28.859910 | orchestrator | 2025-01-16 15:00:28 | INFO  | Task 5283827f-5507-4696-a3bb-2b0d5a133d6a is in state STARTED 2025-01-16 15:00:28.860047 | orchestrator | 2025-01-16 15:00:28 | INFO  | Task 443b3e12-ce2c-49f4-a9af-e72916dbf1f4 is in state STARTED 2025-01-16 15:00:28.863538 | orchestrator | 2025-01-16 15:00:28 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:00:28.866651 | orchestrator | 2025-01-16 15:00:28 | INFO  | Task 2eb39f74-a051-4dd8-b6bc-897a7c552353 is in state STARTED 2025-01-16 15:00:31.901718 | orchestrator | 2025-01-16 15:00:28 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:00:31.901941 | orchestrator | 2025-01-16 15:00:31 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:00:31.902090 | orchestrator | 2025-01-16 15:00:31 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:00:31.902146 | orchestrator | 2025-01-16 15:00:31 | INFO  | Task 5986cec7-7083-42cb-9ee8-d91b3736b4d2 is in state STARTED 2025-01-16 15:00:31.907058 | orchestrator | 2025-01-16 15:00:31 | INFO  | Task 5283827f-5507-4696-a3bb-2b0d5a133d6a is in state STARTED 2025-01-16 15:00:31.908484 | orchestrator | 2025-01-16 15:00:31 | INFO  | Task 443b3e12-ce2c-49f4-a9af-e72916dbf1f4 is in state STARTED 2025-01-16 15:00:31.910279 | orchestrator | 2025-01-16 15:00:31 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:00:31.910421 | orchestrator | 2025-01-16 15:00:31 | INFO  | Task 2eb39f74-a051-4dd8-b6bc-897a7c552353 is in state STARTED 2025-01-16 15:00:31.910453 | orchestrator | 2025-01-16 15:00:31 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:00:34.936450 | orchestrator | 2025-01-16 15:00:34 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:00:34.939576 | orchestrator | 2025-01-16 15:00:34 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:00:34.939664 | orchestrator | 2025-01-16 15:00:34 | INFO  | Task 5986cec7-7083-42cb-9ee8-d91b3736b4d2 is in state STARTED 2025-01-16 15:00:34.943555 | orchestrator | 2025-01-16 15:00:34 | INFO  | Task 5283827f-5507-4696-a3bb-2b0d5a133d6a is in state STARTED 2025-01-16 15:00:34.944163 | orchestrator | 2025-01-16 15:00:34 | INFO  | Task 443b3e12-ce2c-49f4-a9af-e72916dbf1f4 is in state STARTED 2025-01-16 15:00:34.944226 | orchestrator | 2025-01-16 15:00:34 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:00:34.944629 | orchestrator | 2025-01-16 15:00:34 | INFO  | Task 2eb39f74-a051-4dd8-b6bc-897a7c552353 is in state STARTED 2025-01-16 15:00:37.969336 | orchestrator | 2025-01-16 15:00:34 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:00:37.969459 | 
orchestrator | 2025-01-16 15:00:37 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:00:37.969657 | orchestrator | 2025-01-16 15:00:37 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:00:37.969676 | orchestrator | 2025-01-16 15:00:37 | INFO  | Task 5986cec7-7083-42cb-9ee8-d91b3736b4d2 is in state STARTED 2025-01-16 15:00:37.969687 | orchestrator | 2025-01-16 15:00:37 | INFO  | Task 5283827f-5507-4696-a3bb-2b0d5a133d6a is in state STARTED 2025-01-16 15:00:37.969702 | orchestrator | 2025-01-16 15:00:37 | INFO  | Task 443b3e12-ce2c-49f4-a9af-e72916dbf1f4 is in state SUCCESS 2025-01-16 15:00:37.970462 | orchestrator | 2025-01-16 15:00:37.970501 | orchestrator | 2025-01-16 15:00:37.970518 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-01-16 15:00:37.970535 | orchestrator | 2025-01-16 15:00:37.970552 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-01-16 15:00:37.970568 | orchestrator | Thursday 16 January 2025 14:59:30 +0000 (0:00:03.665) 0:00:03.665 ****** 2025-01-16 15:00:37.970583 | orchestrator | ok: [testbed-manager] 2025-01-16 15:00:37.970616 | orchestrator | 2025-01-16 15:00:37.970626 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-01-16 15:00:37.970644 | orchestrator | Thursday 16 January 2025 14:59:32 +0000 (0:00:02.262) 0:00:05.927 ****** 2025-01-16 15:00:37.970655 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-01-16 15:00:37.970665 | orchestrator | 2025-01-16 15:00:37.970675 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-01-16 15:00:37.970685 | orchestrator | Thursday 16 January 2025 14:59:34 +0000 (0:00:01.437) 0:00:07.364 ****** 2025-01-16 15:00:37.970695 | orchestrator | changed: [testbed-manager] 2025-01-16 15:00:37.970705 | orchestrator | 2025-01-16 15:00:37.970714 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-01-16 15:00:37.970744 | orchestrator | Thursday 16 January 2025 14:59:36 +0000 (0:00:02.077) 0:00:09.442 ****** 2025-01-16 15:00:37.970754 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 
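
For orientation, the osism.services.phpmyadmin tasks above (create the external traefik network, create /opt/phpmyadmin, copy docker-compose.yml, manage the service) amount to a small docker-compose deployment fronted by Traefik. The sketch below is a purely hypothetical minimal compose file of that shape; the real file is rendered from the role's own template, and the image tag, environment value, and network name shown here are assumptions, not taken from this job.

# Hypothetical docker-compose.yml sketch for a phpMyAdmin service behind Traefik.
# Image, environment value, and network name are assumptions, not the role's template.
services:
  phpmyadmin:
    image: phpmyadmin:latest
    restart: unless-stopped
    environment:
      PMA_HOST: "<database-host>"   # placeholder; the role supplies the real address
    networks:
      - traefik

networks:
  traefik:
    external: true                  # corresponds to the "Create traefik external network" task above
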
2025-01-16 15:00:37.970800 | orchestrator | ok: [testbed-manager] 2025-01-16 15:00:37.970811 | orchestrator | 2025-01-16 15:00:37.970821 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-01-16 15:00:37.970843 | orchestrator | Thursday 16 January 2025 15:00:21 +0000 (0:00:45.034) 0:00:54.477 ****** 2025-01-16 15:00:37.970853 | orchestrator | changed: [testbed-manager] 2025-01-16 15:00:37.970862 | orchestrator | 2025-01-16 15:00:37.970872 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 15:00:37.970882 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 15:00:37.970893 | orchestrator | 2025-01-16 15:00:37.970902 | orchestrator | 2025-01-16 15:00:37.970912 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 15:00:37.970921 | orchestrator | Thursday 16 January 2025 15:00:25 +0000 (0:00:04.461) 0:00:58.938 ****** 2025-01-16 15:00:37.970930 | orchestrator | =============================================================================== 2025-01-16 15:00:37.970940 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 45.03s 2025-01-16 15:00:37.970950 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 4.46s 2025-01-16 15:00:37.970959 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 2.26s 2025-01-16 15:00:37.970969 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 2.08s 2025-01-16 15:00:37.970978 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 1.44s 2025-01-16 15:00:37.970988 | orchestrator | 2025-01-16 15:00:37.970997 | orchestrator | 2025-01-16 15:00:37.971006 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-01-16 15:00:37.971015 | orchestrator | 2025-01-16 15:00:37.971025 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-01-16 15:00:37.971034 | orchestrator | Thursday 16 January 2025 14:58:50 +0000 (0:00:05.402) 0:00:05.402 ****** 2025-01-16 15:00:37.971043 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-01-16 15:00:37.971053 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-01-16 15:00:37.971063 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-01-16 15:00:37.971072 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-01-16 15:00:37.971082 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-01-16 15:00:37.971091 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-01-16 15:00:37.971100 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-01-16 15:00:37.971110 | orchestrator | 2025-01-16 15:00:37.971119 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-01-16 15:00:37.971128 | orchestrator | 2025-01-16 15:00:37.971138 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-01-16 15:00:37.971147 | orchestrator | Thursday 16 January 2025 14:58:59 +0000 (0:00:09.493) 0:00:14.895 ****** 2025-01-16 15:00:37.971159 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 15:00:37.971171 | orchestrator | 2025-01-16 15:00:37.971180 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-01-16 15:00:37.971189 | orchestrator | Thursday 16 January 2025 14:59:04 +0000 (0:00:05.296) 0:00:20.192 ****** 2025-01-16 15:00:37.971199 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:00:37.971208 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:00:37.971217 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:00:37.971234 | orchestrator | ok: [testbed-manager] 2025-01-16 15:00:37.971244 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:00:37.971253 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:00:37.971262 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:00:37.971272 | orchestrator | 2025-01-16 15:00:37.971281 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-01-16 15:00:37.971291 | orchestrator | Thursday 16 January 2025 14:59:10 +0000 (0:00:05.678) 0:00:25.870 ****** 2025-01-16 15:00:37.971300 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:00:37.971309 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:00:37.971318 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:00:37.971328 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:00:37.971337 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:00:37.971384 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:00:37.971394 | orchestrator | ok: [testbed-manager] 2025-01-16 15:00:37.971404 | orchestrator | 2025-01-16 15:00:37.971422 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-01-16 15:00:37.971432 | orchestrator | Thursday 16 January 2025 14:59:15 +0000 (0:00:04.799) 0:00:30.670 ****** 2025-01-16 15:00:37.971441 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:00:37.971451 | orchestrator | changed: [testbed-manager] 2025-01-16 15:00:37.971460 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:00:37.971470 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:00:37.971479 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:00:37.971488 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:00:37.971497 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:00:37.971507 | orchestrator | 2025-01-16 15:00:37.971516 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-01-16 15:00:37.971525 | orchestrator | Thursday 16 January 2025 14:59:21 +0000 (0:00:06.318) 0:00:36.988 ****** 2025-01-16 15:00:37.971535 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:00:37.971544 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:00:37.971553 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:00:37.971562 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:00:37.971572 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:00:37.971581 | orchestrator | changed: [testbed-manager] 2025-01-16 15:00:37.971590 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:00:37.971600 | orchestrator | 2025-01-16 15:00:37.971610 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-01-16 15:00:37.971619 | orchestrator | Thursday 16 January 2025 14:59:29 +0000 (0:00:07.883) 0:00:44.872 ****** 
2025-01-16 15:00:37.971628 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:00:37.971638 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:00:37.971647 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:00:37.971656 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:00:37.971666 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:00:37.971675 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:00:37.971685 | orchestrator | changed: [testbed-manager] 2025-01-16 15:00:37.971694 | orchestrator | 2025-01-16 15:00:37.971703 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-01-16 15:00:37.971713 | orchestrator | Thursday 16 January 2025 14:59:42 +0000 (0:00:13.253) 0:00:58.126 ****** 2025-01-16 15:00:37.971723 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 15:00:37.971738 | orchestrator | 2025-01-16 15:00:37.971747 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-01-16 15:00:37.971757 | orchestrator | Thursday 16 January 2025 14:59:46 +0000 (0:00:04.078) 0:01:02.205 ****** 2025-01-16 15:00:37.971798 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-01-16 15:00:37.971808 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-01-16 15:00:37.971818 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-01-16 15:00:37.971833 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-01-16 15:00:37.971843 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-01-16 15:00:37.971852 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-01-16 15:00:37.971862 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-01-16 15:00:37.971871 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-01-16 15:00:37.971881 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-01-16 15:00:37.971890 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-01-16 15:00:37.971899 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-01-16 15:00:37.971909 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-01-16 15:00:37.971918 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-01-16 15:00:37.971927 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-01-16 15:00:37.971936 | orchestrator | 2025-01-16 15:00:37.971946 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-01-16 15:00:37.971956 | orchestrator | Thursday 16 January 2025 14:59:58 +0000 (0:00:11.549) 0:01:13.754 ****** 2025-01-16 15:00:37.971966 | orchestrator | ok: [testbed-manager] 2025-01-16 15:00:37.971976 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:00:37.971985 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:00:37.971999 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:00:37.972008 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:00:37.972017 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:00:37.972027 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:00:37.972036 | orchestrator | 2025-01-16 15:00:37.972046 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] 
************** 2025-01-16 15:00:37.972059 | orchestrator | Thursday 16 January 2025 15:00:08 +0000 (0:00:09.689) 0:01:23.443 ****** 2025-01-16 15:00:37.972069 | orchestrator | changed: [testbed-manager] 2025-01-16 15:00:37.972078 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:00:37.972087 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:00:37.972097 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:00:37.972106 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:00:37.972115 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:00:37.972125 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:00:37.972134 | orchestrator | 2025-01-16 15:00:37.972143 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-01-16 15:00:37.972153 | orchestrator | Thursday 16 January 2025 15:00:12 +0000 (0:00:04.577) 0:01:28.020 ****** 2025-01-16 15:00:37.972162 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:00:37.972172 | orchestrator | ok: [testbed-manager] 2025-01-16 15:00:37.972181 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:00:37.972191 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:00:37.972200 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:00:37.972210 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:00:37.972219 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:00:37.972228 | orchestrator | 2025-01-16 15:00:37.972238 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-01-16 15:00:37.972248 | orchestrator | Thursday 16 January 2025 15:00:15 +0000 (0:00:02.719) 0:01:30.740 ****** 2025-01-16 15:00:37.972257 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:00:37.972266 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:00:37.972282 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:00:37.972383 | orchestrator | ok: [testbed-manager] 2025-01-16 15:00:37.972397 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:00:37.972407 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:00:37.972416 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:00:37.972425 | orchestrator | 2025-01-16 15:00:37.972439 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-01-16 15:00:37.972455 | orchestrator | Thursday 16 January 2025 15:00:20 +0000 (0:00:04.727) 0:01:35.467 ****** 2025-01-16 15:00:37.972469 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-01-16 15:00:37.972500 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 15:00:37.972516 | orchestrator | 2025-01-16 15:00:37.972531 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-01-16 15:00:37.972546 | orchestrator | Thursday 16 January 2025 15:00:24 +0000 (0:00:04.675) 0:01:40.142 ****** 2025-01-16 15:00:37.972561 | orchestrator | changed: [testbed-manager] 2025-01-16 15:00:37.972576 | orchestrator | 2025-01-16 15:00:37.972592 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-01-16 15:00:37.972607 | orchestrator | Thursday 16 January 2025 15:00:27 +0000 (0:00:02.898) 0:01:43.041 ****** 2025-01-16 15:00:37.972623 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:00:37.972633 
| orchestrator | changed: [testbed-node-3] 2025-01-16 15:00:37.972643 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:00:37.972653 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:00:37.972662 | orchestrator | changed: [testbed-manager] 2025-01-16 15:00:37.972671 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:00:37.972680 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:00:37.972689 | orchestrator | 2025-01-16 15:00:37.972699 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 15:00:37.972708 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 15:00:37.972718 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 15:00:37.972733 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 15:00:37.972742 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 15:00:37.972754 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 15:00:37.972790 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 15:00:37.972804 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 15:00:37.972815 | orchestrator | 2025-01-16 15:00:37.972826 | orchestrator | 2025-01-16 15:00:37.972838 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 15:00:37.972849 | orchestrator | Thursday 16 January 2025 15:00:35 +0000 (0:00:07.647) 0:01:50.689 ****** 2025-01-16 15:00:37.972860 | orchestrator | =============================================================================== 2025-01-16 15:00:37.972871 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 13.25s 2025-01-16 15:00:37.972883 | orchestrator | osism.services.netdata : Copy configuration files ---------------------- 11.55s 2025-01-16 15:00:37.972894 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 9.69s 2025-01-16 15:00:37.972905 | orchestrator | Group hosts based on enabled services ----------------------------------- 9.49s 2025-01-16 15:00:37.972916 | orchestrator | osism.services.netdata : Add repository --------------------------------- 7.88s 2025-01-16 15:00:37.972928 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 7.65s 2025-01-16 15:00:37.972939 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 6.32s 2025-01-16 15:00:37.972950 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 5.68s 2025-01-16 15:00:37.972961 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 5.30s 2025-01-16 15:00:37.972979 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.80s 2025-01-16 15:00:37.972991 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 4.73s 2025-01-16 15:00:37.973002 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 4.68s 2025-01-16 15:00:37.973013 | orchestrator | osism.services.netdata : Opt out from anonymous statistics 
-------------- 4.58s 2025-01-16 15:00:37.973026 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 4.08s 2025-01-16 15:00:37.973038 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.90s 2025-01-16 15:00:37.973052 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 2.72s
2025-01-16 15:00:37 .. 15:01:05 | orchestrator | INFO  | Task-state polling (condensed): tasks 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6, 6354ccb1-7afa-4063-99eb-fd395bbe0c66, 5986cec7-7083-42cb-9ee8-d91b3736b4d2, 5283827f-5507-4696-a3bb-2b0d5a133d6a, 31f2197f-46b6-4593-b320-01070af0c657 and 2eb39f74-a051-4dd8-b6bc-897a7c552353 are reported in state STARTED every few seconds, each round followed by "Wait 1 second(s) until the next check"; 5986cec7-7083-42cb-9ee8-d91b3736b4d2 reaches SUCCESS at 15:00:53 and b9be7df1-7991-4405-ac6d-292561267f67 joins the polled set in state STARTED from 15:00:53.
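The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" messages above are the osism client polling the state of the deployment tasks it has queued until each one reports SUCCESS or FAILURE. A minimal sketch of such a wait loop follows; wait_for_tasks and fetch_state are illustrative placeholders, not the actual osism API.

    import time

    def wait_for_tasks(task_ids, fetch_state, interval=1.0):
        """Poll task states until every task has finished (sketch only)."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                # fetch_state is a stand-in for however the client queries the
                # task backend; it returns e.g. "STARTED", "SUCCESS", "FAILURE".
                state = fetch_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {int(interval)} second(s) until the next check")
                time.sleep(interval)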
2025-01-16 15:01:05.331776 | orchestrator | 2025-01-16 15:01:05.331826 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-01-16 15:01:05.331850 | orchestrator | 2025-01-16 15:01:05.331872 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-01-16 15:01:05.331916 | orchestrator | Thursday 16 January 2025 15:00:35 +0000 (0:00:00.650) 0:00:00.650 ****** 2025-01-16 15:01:05.331938 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:01:05.331965 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:01:05.331987 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:01:05.332009 | orchestrator | 2025-01-16 15:01:05.332032 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-01-16 15:01:05.332052 | orchestrator | Thursday 16 January 2025 15:00:35 +0000 (0:00:00.705) 0:00:01.356 ****** 2025-01-16 15:01:05.332075 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-01-16 15:01:05.332098 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-01-16 15:01:05.332120 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-01-16 15:01:05.332142 | orchestrator | 2025-01-16 15:01:05.332162 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-01-16 15:01:05.332182 | orchestrator | 2025-01-16 15:01:05.332203 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-01-16 15:01:05.332225 | orchestrator | Thursday 16 January 2025 15:00:36 +0000 (0:00:00.342) 0:00:01.699 ****** 2025-01-16 15:01:05.332248 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:01:05.332272 | orchestrator | 2025-01-16 15:01:05.332294 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2025-01-16 15:01:05.332316 | orchestrator | Thursday 16 January 2025 15:00:37 +0000 (0:00:01.439) 0:00:03.138 ****** 2025-01-16 15:01:05.332340 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-01-16 15:01:05.332364 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-01-16 15:01:05.332390 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-01-16 15:01:05.332413 | orchestrator | 2025-01-16 15:01:05.332436 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-01-16 15:01:05.332461 | orchestrator | Thursday 16 January 2025 15:00:38 +0000 (0:00:00.667) 0:00:03.806 ****** 2025-01-16 15:01:05.332483 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-01-16 15:01:05.332536 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-01-16 15:01:05.332559 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-01-16 15:01:05.332582 | orchestrator | 2025-01-16 15:01:05.332604 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-01-16 15:01:05.332626 | orchestrator | Thursday 16 January 2025 15:00:40 +0000 (0:00:02.070) 0:00:05.876 ****** 2025-01-16 15:01:05.332649 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:01:05.332680 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:01:05.332701 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:01:05.332719 | orchestrator | 2025-01-16 15:01:05.332740 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-01-16 15:01:05.332801 | orchestrator | Thursday 16 January 2025 15:00:43 +0000 (0:00:03.215) 0:00:09.092 ****** 2025-01-16 15:01:05.332824 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:01:05.332846 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:01:05.332868 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:01:05.332890 | orchestrator | 2025-01-16 15:01:05.332912 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 15:01:05.332933 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 15:01:05.332958 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 15:01:05.332981 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 15:01:05.333003 | orchestrator | 2025-01-16 15:01:05.333024 | orchestrator | 2025-01-16 15:01:05.333042 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 15:01:05.333065 | orchestrator | Thursday 16 January 2025 15:00:50 +0000 (0:00:07.030) 0:00:16.122 ****** 2025-01-16 15:01:05.333086 | orchestrator | =============================================================================== 2025-01-16 15:01:05.333108 | orchestrator | memcached : Restart memcached container --------------------------------- 7.03s 2025-01-16 15:01:05.333129 | orchestrator | memcached : Check memcached container ----------------------------------- 3.22s 2025-01-16 15:01:05.333151 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.07s 2025-01-16 15:01:05.333173 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.44s 2025-01-16 15:01:05.333191 | orchestrator | Group hosts based on Kolla action 
--------------------------------------- 0.71s 2025-01-16 15:01:05.333211 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.67s 2025-01-16 15:01:05.333233 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.34s 2025-01-16 15:01:05.333255 | orchestrator | 2025-01-16 15:01:05.333278 | orchestrator | 2025-01-16 15:01:05 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:01:05.333300 | orchestrator | 2025-01-16 15:01:05 | INFO  | Task 5283827f-5507-4696-a3bb-2b0d5a133d6a is in state SUCCESS 2025-01-16 15:01:05.333335 | orchestrator | 2025-01-16 15:01:05.333357 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-01-16 15:01:05.333378 | orchestrator | 2025-01-16 15:01:05.333399 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-01-16 15:01:05.333422 | orchestrator | Thursday 16 January 2025 15:00:34 +0000 (0:00:00.443) 0:00:00.443 ****** 2025-01-16 15:01:05.333444 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:01:05.333468 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:01:05.333488 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:01:05.333508 | orchestrator | 2025-01-16 15:01:05.333530 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-01-16 15:01:05.333551 | orchestrator | Thursday 16 January 2025 15:00:34 +0000 (0:00:00.560) 0:00:01.004 ****** 2025-01-16 15:01:05.333574 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-01-16 15:01:05.333611 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-01-16 15:01:05.333642 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-01-16 15:01:05.333662 | orchestrator | 2025-01-16 15:01:05.333681 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-01-16 15:01:05.333703 | orchestrator | 2025-01-16 15:01:05.333726 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-01-16 15:01:05.333776 | orchestrator | Thursday 16 January 2025 15:00:35 +0000 (0:00:00.646) 0:00:01.650 ****** 2025-01-16 15:01:05.333800 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:01:05.333824 | orchestrator | 2025-01-16 15:01:05.333845 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-01-16 15:01:05.333868 | orchestrator | Thursday 16 January 2025 15:00:37 +0000 (0:00:01.540) 0:00:03.191 ****** 2025-01-16 15:01:05.333893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-01-16 15:01:05.333921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis:2024.1', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-01-16 15:01:05.333943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-01-16 15:01:05.333965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-01-16 15:01:05.334001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-01-16 15:01:05.334111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-01-16 15:01:05.334141 | orchestrator | 2025-01-16 15:01:05.334164 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-01-16 15:01:05.334187 | orchestrator | Thursday 16 January 2025 15:00:38 +0000 (0:00:01.114) 0:00:04.305 ****** 2025-01-16 15:01:05.334209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 
'redis', 'group': 'redis', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-01-16 15:01:05.334233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-01-16 15:01:05.334254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-01-16 15:01:05.334273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-01-16 15:01:05.334307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-01-16 15:01:05.334339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis-sentinel:2024.1', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-01-16 15:01:05.334358 | orchestrator | 2025-01-16 15:01:05.334376 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-01-16 15:01:05.334395 | orchestrator | Thursday 16 January 2025 15:00:40 +0000 (0:00:02.408) 0:00:06.714 ****** 2025-01-16 15:01:05.334414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-01-16 15:01:05.334432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-01-16 15:01:05.334452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-01-16 15:01:05.334470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-01-16 15:01:05.334489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis-sentinel:2024.1', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-01-16 15:01:05.334526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-01-16 15:01:05.334545 | orchestrator | 2025-01-16 15:01:05.334563 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-01-16 15:01:05.334578 | orchestrator | Thursday 16 January 2025 15:00:44 +0000 (0:00:04.045) 0:00:10.760 ****** 2025-01-16 15:01:05.334594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-01-16 15:01:05.334614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-01-16 15:01:05.334632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-01-16 15:01:05.334651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis-sentinel:2024.1', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-01-16 15:01:05.334670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-01-16 15:01:05.334706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-01-16 15:01:08.353983 | orchestrator | 2025-01-16 15:01:08.354134 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-01-16 15:01:08.354150 | orchestrator | Thursday 16 January 2025 15:00:46 +0000 (0:00:02.325) 0:00:13.086 ****** 2025-01-16 15:01:08.354160 | orchestrator | 2025-01-16 15:01:08.354169 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-01-16 15:01:08.354273 | orchestrator | Thursday 16 January 2025 15:00:47 +0000 (0:00:00.119) 0:00:13.205 ****** 2025-01-16 15:01:08.354286 | orchestrator | 2025-01-16 15:01:08.354296 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-01-16 15:01:08.354306 | orchestrator | Thursday 16 January 2025 15:00:47 +0000 (0:00:00.090) 0:00:13.296 ****** 2025-01-16 15:01:08.354316 | orchestrator | 2025-01-16 15:01:08.354326 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-01-16 15:01:08.354336 | orchestrator | Thursday 16 January 2025 15:00:47 +0000 (0:00:00.116) 0:00:13.413 ****** 2025-01-16 15:01:08.354346 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:01:08.354358 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:01:08.354369 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:01:08.354379 | orchestrator | 2025-01-16 15:01:08.354389 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-01-16 15:01:08.354399 | orchestrator | Thursday 16 January 2025 15:00:51 +0000 (0:00:04.223) 0:00:17.636 ****** 2025-01-16 15:01:08.354409 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:01:08.354418 | 
orchestrator | changed: [testbed-node-2] 2025-01-16 15:01:08.354428 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:01:08.354454 | orchestrator | 2025-01-16 15:01:08.354463 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 15:01:08.354472 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 15:01:08.354482 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 15:01:08.354491 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 15:01:08.354499 | orchestrator | 2025-01-16 15:01:08.354507 | orchestrator | 2025-01-16 15:01:08.354516 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 15:01:08.354525 | orchestrator | Thursday 16 January 2025 15:01:02 +0000 (0:00:10.562) 0:00:28.198 ****** 2025-01-16 15:01:08.354533 | orchestrator | =============================================================================== 2025-01-16 15:01:08.354587 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 10.56s 2025-01-16 15:01:08.354599 | orchestrator | redis : Restart redis container ----------------------------------------- 4.22s 2025-01-16 15:01:08.354629 | orchestrator | redis : Copying over redis config files --------------------------------- 4.05s 2025-01-16 15:01:08.354638 | orchestrator | redis : Copying over default config.json files -------------------------- 2.41s 2025-01-16 15:01:08.354645 | orchestrator | redis : Check redis containers ------------------------------------------ 2.33s 2025-01-16 15:01:08.354653 | orchestrator | redis : include_tasks --------------------------------------------------- 1.54s 2025-01-16 15:01:08.354661 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.11s 2025-01-16 15:01:08.354668 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.65s 2025-01-16 15:01:08.354681 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.56s 2025-01-16 15:01:08.354689 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.33s
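The per-service dictionaries echoed by the redis tasks above (and by the openvswitch tasks further down) all follow the same kolla-ansible container-spec shape: image, volumes, dimensions and a healthcheck. The sketch below only illustrates reading such a spec; the summarize helper is hypothetical, the dict is abbreviated from the log, and this is not how kolla-ansible itself consumes these values.

    # Abbreviated from the redis spec printed in the log; helper is illustrative only.
    redis_spec = {
        "container_name": "redis",
        "image": "nexus.testbed.osism.xyz:8193/kolla/redis:2024.1",
        "volumes": [
            "/etc/kolla/redis/:/var/lib/kolla/config_files/:ro",
            "redis:/var/lib/redis/",
            "kolla_logs:/var/log/kolla/",
        ],
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_listen redis-server 6379"],
            "timeout": "30",
        },
    }

    def summarize(spec):
        """Split bind mounts from named volumes and pull out the healthcheck command."""
        hc = spec.get("healthcheck", {})
        return {
            "name": spec["container_name"],
            "image": spec["image"],
            "bind_mounts": [v for v in spec["volumes"] if v.startswith("/")],
            "named_volumes": [v for v in spec["volumes"] if not v.startswith("/")],
            "healthcheck_cmd": " ".join(hc.get("test", [])[1:]),
        }

    print(summarize(redis_spec))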
2025-01-16 15:01:05 .. 15:01:47 | orchestrator | INFO  | Task-state polling (condensed): tasks b9be7df1-7991-4405-ac6d-292561267f67, 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6, 6354ccb1-7afa-4063-99eb-fd395bbe0c66, 31f2197f-46b6-4593-b320-01070af0c657 and 2eb39f74-a051-4dd8-b6bc-897a7c552353 are reported in state STARTED every few seconds, each round followed by "Wait 1 second(s) until the next check"; 5a7da571-e027-409c-8183-7d47bd56fe58 appears in state STARTED from 15:01:14 and reaches SUCCESS at 15:01:23; 3c6588af-5421-4033-babf-7a0ad4d64bc5 appears in state STARTED at 15:01:47. 2025-01-16 15:01:47.808664 | orchestrator | 2025-01-16
15:01:47 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:01:47.808868 | orchestrator | 2025-01-16 15:01:47 | INFO  | Task 2eb39f74-a051-4dd8-b6bc-897a7c552353 is in state SUCCESS 2025-01-16 15:01:47.810342 | orchestrator | 2025-01-16 15:01:47.810394 | orchestrator | None 2025-01-16 15:01:47.810409 | orchestrator | 2025-01-16 15:01:47.810423 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-01-16 15:01:47.810438 | orchestrator | 2025-01-16 15:01:47.810453 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-01-16 15:01:47.810468 | orchestrator | Thursday 16 January 2025 15:00:35 +0000 (0:00:00.700) 0:00:00.700 ****** 2025-01-16 15:01:47.810493 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:01:47.810510 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:01:47.810524 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:01:47.810538 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:01:47.810553 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:01:47.810567 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:01:47.810581 | orchestrator | 2025-01-16 15:01:47.810596 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-01-16 15:01:47.810611 | orchestrator | Thursday 16 January 2025 15:00:36 +0000 (0:00:01.110) 0:00:01.810 ****** 2025-01-16 15:01:47.810625 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-01-16 15:01:47.810639 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-01-16 15:01:47.810654 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-01-16 15:01:47.810668 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-01-16 15:01:47.810682 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-01-16 15:01:47.810696 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-01-16 15:01:47.810710 | orchestrator | 2025-01-16 15:01:47.810755 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-01-16 15:01:47.810770 | orchestrator | 2025-01-16 15:01:47.810784 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-01-16 15:01:47.810798 | orchestrator | Thursday 16 January 2025 15:00:37 +0000 (0:00:01.452) 0:00:03.263 ****** 2025-01-16 15:01:47.810814 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 15:01:47.810850 | orchestrator | 2025-01-16 15:01:47.810865 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-01-16 15:01:47.810879 | orchestrator | Thursday 16 January 2025 15:00:39 +0000 (0:00:01.695) 0:00:04.958 ****** 2025-01-16 15:01:47.810894 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-01-16 15:01:47.810910 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-01-16 15:01:47.810925 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-01-16 15:01:47.810941 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-01-16 15:01:47.810956 | orchestrator | changed: [testbed-node-4] => 
(item=openvswitch) 2025-01-16 15:01:47.810972 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-01-16 15:01:47.810988 | orchestrator | 2025-01-16 15:01:47.811003 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-01-16 15:01:47.811019 | orchestrator | Thursday 16 January 2025 15:00:40 +0000 (0:00:01.265) 0:00:06.224 ****** 2025-01-16 15:01:47.811034 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-01-16 15:01:47.811049 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-01-16 15:01:47.811065 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-01-16 15:01:47.811080 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-01-16 15:01:47.811096 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-01-16 15:01:47.811110 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-01-16 15:01:47.811126 | orchestrator | 2025-01-16 15:01:47.811142 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-01-16 15:01:47.811158 | orchestrator | Thursday 16 January 2025 15:00:44 +0000 (0:00:03.545) 0:00:09.769 ****** 2025-01-16 15:01:47.811173 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-01-16 15:01:47.811188 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:01:47.811205 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-01-16 15:01:47.811221 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:01:47.811236 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-01-16 15:01:47.811252 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:01:47.811269 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-01-16 15:01:47.811284 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:01:47.811298 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-01-16 15:01:47.811312 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:01:47.811326 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-01-16 15:01:47.811340 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:01:47.811354 | orchestrator | 2025-01-16 15:01:47.811368 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-01-16 15:01:47.811382 | orchestrator | Thursday 16 January 2025 15:00:46 +0000 (0:00:02.459) 0:00:12.228 ****** 2025-01-16 15:01:47.811396 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:01:47.811410 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:01:47.811424 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:01:47.811438 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:01:47.811452 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:01:47.811466 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:01:47.811480 | orchestrator | 2025-01-16 15:01:47.811494 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-01-16 15:01:47.811508 | orchestrator | Thursday 16 January 2025 15:00:48 +0000 (0:00:01.806) 0:00:14.035 ****** 2025-01-16 15:01:47.811536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-01-16 15:01:47.811564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-01-16 15:01:47.811580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-01-16 15:01:47.811595 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-01-16 15:01:47.811609 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-01-16 15:01:47.811631 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-01-16 15:01:47.811654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-01-16 15:01:47.811669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-01-16 15:01:47.811683 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-01-16 15:01:47.811698 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-01-16 15:01:47.811713 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-01-16 15:01:47.811795 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-01-16 15:01:47.811818 | orchestrator | 2025-01-16 15:01:47.811833 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-01-16 15:01:47.811848 | orchestrator | Thursday 16 January 2025 15:00:52 +0000 (0:00:03.747) 0:00:17.783 ****** 2025-01-16 15:01:47.811862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-01-16 15:01:47.811877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-01-16 15:01:47.811892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-01-16 15:01:47.811931 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-01-16 15:01:47.811947 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-01-16 15:01:47.811977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-01-16 15:01:47.811992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-01-16 
15:01:47.812007 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-01-16 15:01:47.812022 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-01-16 15:01:47.812045 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-01-16 15:01:47.812061 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-01-16 15:01:47.812088 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-01-16 15:01:47.812104 | orchestrator | 2025-01-16 15:01:47.812118 | orchestrator | TASK [openvswitch : Copying over start-ovs file for openvswitch-vswitchd] ****** 2025-01-16 15:01:47.812132 | orchestrator | Thursday 16 January 2025 15:00:56 +0000 (0:00:04.376) 0:00:22.159 ****** 2025-01-16 15:01:47.812147 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:01:47.812161 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:01:47.812175 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:01:47.812189 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:01:47.812203 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:01:47.812217 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:01:47.812231 | orchestrator | 2025-01-16 15:01:47.812245 | orchestrator | TASK [openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server] *** 2025-01-16 15:01:47.812259 | orchestrator | Thursday 16 January 2025 15:00:58 +0000 (0:00:02.320) 0:00:24.480 ****** 2025-01-16 15:01:47.812273 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:01:47.812287 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:01:47.812300 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:01:47.812315 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:01:47.812329 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:01:47.812343 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:01:47.812357 | orchestrator | 2025-01-16 15:01:47.812376 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-01-16 15:01:47.812390 | orchestrator | Thursday 16 January 2025 15:01:00 +0000 (0:00:01.918) 0:00:26.399 ****** 2025-01-16 15:01:47.812404 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:01:47.812418 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:01:47.812432 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:01:47.812446 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:01:47.812460 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:01:47.812474 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:01:47.812488 | orchestrator | 2025-01-16 15:01:47.812502 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-01-16 15:01:47.812516 | orchestrator | Thursday 16 January 2025 15:01:02 +0000 (0:00:01.729) 0:00:28.128 ****** 2025-01-16 15:01:47.812530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-01-16 15:01:47.812560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-01-16 15:01:47.812582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-01-16 15:01:47.812597 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-01-16 15:01:47.812612 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-01-16 15:01:47.812626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl 
version'], 'timeout': '30'}}}) 2025-01-16 15:01:47.812650 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-01-16 15:01:47.812671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-01-16 15:01:47.812692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-01-16 15:01:47.812707 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-01-16 15:01:47.812783 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-01-16 15:01:47.812818 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-01-16 15:01:47.812841 | orchestrator | 2025-01-16 15:01:47.812856 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-01-16 15:01:47.812871 | orchestrator | Thursday 16 January 2025 15:01:05 +0000 (0:00:02.932) 0:00:31.060 ****** 2025-01-16 15:01:47.812885 | orchestrator | 2025-01-16 15:01:47.812899 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-01-16 15:01:47.812913 | orchestrator | Thursday 16 January 2025 15:01:05 +0000 (0:00:00.304) 0:00:31.365 ****** 2025-01-16 15:01:47.812927 | orchestrator | 2025-01-16 15:01:47.812941 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-01-16 15:01:47.812955 | orchestrator | Thursday 16 January 2025 15:01:06 +0000 (0:00:00.579) 0:00:31.945 ****** 2025-01-16 15:01:47.812968 | orchestrator | 2025-01-16 15:01:47.812982 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-01-16 15:01:47.812996 | orchestrator | Thursday 16 January 2025 15:01:06 +0000 (0:00:00.209) 0:00:32.155 ****** 2025-01-16 15:01:47.813010 | orchestrator | 2025-01-16 15:01:47.813024 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-01-16 15:01:47.813038 | orchestrator | Thursday 16 January 2025 15:01:07 +0000 (0:00:00.625) 0:00:32.781 ****** 2025-01-16 15:01:47.813051 | orchestrator | 2025-01-16 15:01:47.813065 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-01-16 15:01:47.813079 | orchestrator | Thursday 16 January 2025 15:01:07 +0000 (0:00:00.228) 0:00:33.009 ****** 2025-01-16 15:01:47.813093 | orchestrator | 2025-01-16 15:01:47.813107 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-01-16 15:01:47.813121 | orchestrator | Thursday 16 January 2025 15:01:07 +0000 (0:00:00.261) 0:00:33.270 ****** 2025-01-16 15:01:47.813134 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:01:47.813148 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:01:47.813162 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:01:47.813176 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:01:47.813190 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:01:47.813204 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:01:47.813217 | orchestrator | 2025-01-16 15:01:47.813231 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-01-16 15:01:47.813245 | orchestrator 
| Thursday 16 January 2025 15:01:16 +0000 (0:00:08.911) 0:00:42.182 ****** 2025-01-16 15:01:47.813259 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:01:47.813273 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:01:47.813287 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:01:47.813301 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:01:47.813315 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:01:47.813329 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:01:47.813343 | orchestrator | 2025-01-16 15:01:47.813364 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-01-16 15:01:47.813379 | orchestrator | Thursday 16 January 2025 15:01:18 +0000 (0:00:02.118) 0:00:44.301 ****** 2025-01-16 15:01:47.813393 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:01:47.813407 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:01:47.813421 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:01:47.813436 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:01:47.813459 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:01:47.813475 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:01:47.813489 | orchestrator | 2025-01-16 15:01:47.813503 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-01-16 15:01:47.813517 | orchestrator | Thursday 16 January 2025 15:01:27 +0000 (0:00:08.970) 0:00:53.272 ****** 2025-01-16 15:01:47.813531 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-01-16 15:01:47.813546 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-01-16 15:01:47.813560 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-01-16 15:01:47.813581 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-01-16 15:01:47.813600 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-01-16 15:01:47.813615 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-01-16 15:01:47.813629 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-01-16 15:01:47.813644 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-01-16 15:01:47.813657 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-01-16 15:01:47.813672 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-01-16 15:01:47.813686 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-01-16 15:01:47.813699 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-01-16 15:01:47.813714 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-01-16 15:01:47.813759 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 
'value': True, 'state': 'absent'}) 2025-01-16 15:01:47.813784 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-01-16 15:01:47.813807 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-01-16 15:01:47.813826 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-01-16 15:01:47.813840 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-01-16 15:01:47.813854 | orchestrator | 2025-01-16 15:01:47.813868 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-01-16 15:01:47.813882 | orchestrator | Thursday 16 January 2025 15:01:32 +0000 (0:00:04.898) 0:00:58.170 ****** 2025-01-16 15:01:47.813896 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-01-16 15:01:47.813910 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:01:47.813924 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-01-16 15:01:47.813938 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:01:47.813952 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-01-16 15:01:47.813966 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:01:47.813980 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-01-16 15:01:47.813994 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-01-16 15:01:47.814008 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-01-16 15:01:47.814067 | orchestrator | 2025-01-16 15:01:47.814081 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-01-16 15:01:47.814096 | orchestrator | Thursday 16 January 2025 15:01:34 +0000 (0:00:01.668) 0:00:59.838 ****** 2025-01-16 15:01:47.814110 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-01-16 15:01:47.814124 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:01:47.814138 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-01-16 15:01:47.814152 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:01:47.814170 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-01-16 15:01:47.814193 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:01:47.814212 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-01-16 15:01:47.814234 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-01-16 15:01:47.814248 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-01-16 15:01:47.814262 | orchestrator | 2025-01-16 15:01:47.814276 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-01-16 15:01:47.814290 | orchestrator | Thursday 16 January 2025 15:01:37 +0000 (0:00:02.855) 0:01:02.694 ****** 2025-01-16 15:01:47.814312 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:01:50.831928 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:01:50.832040 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:01:50.832056 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:01:50.832064 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:01:50.832081 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:01:50.832086 | orchestrator | 2025-01-16 
15:01:50.832094 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 15:01:50.832100 | orchestrator | testbed-node-0 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-01-16 15:01:50.832107 | orchestrator | testbed-node-1 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-01-16 15:01:50.832112 | orchestrator | testbed-node-2 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-01-16 15:01:50.832117 | orchestrator | testbed-node-3 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-01-16 15:01:50.832122 | orchestrator | testbed-node-4 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-01-16 15:01:50.832141 | orchestrator | testbed-node-5 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-01-16 15:01:50.832146 | orchestrator | 2025-01-16 15:01:50.832151 | orchestrator | 2025-01-16 15:01:50.832156 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 15:01:50.832164 | orchestrator | Thursday 16 January 2025 15:01:44 +0000 (0:00:07.304) 0:01:09.998 ****** 2025-01-16 15:01:50.832169 | orchestrator | =============================================================================== 2025-01-16 15:01:50.832174 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 16.27s 2025-01-16 15:01:50.832179 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 8.91s 2025-01-16 15:01:50.832183 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 4.90s 2025-01-16 15:01:50.832189 | orchestrator | openvswitch : Copying over config.json files for services --------------- 4.38s 2025-01-16 15:01:50.832194 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 3.75s 2025-01-16 15:01:50.832199 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 3.55s 2025-01-16 15:01:50.832204 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.93s 2025-01-16 15:01:50.832209 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 2.86s 2025-01-16 15:01:50.832214 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.46s 2025-01-16 15:01:50.832219 | orchestrator | openvswitch : Copying over start-ovs file for openvswitch-vswitchd ------ 2.32s 2025-01-16 15:01:50.832224 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 2.21s 2025-01-16 15:01:50.832229 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.12s 2025-01-16 15:01:50.832234 | orchestrator | openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server --- 1.92s 2025-01-16 15:01:50.832240 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.81s 2025-01-16 15:01:50.832244 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.73s 2025-01-16 15:01:50.832266 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.70s 2025-01-16 15:01:50.832271 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 1.67s 2025-01-16 15:01:50.832276 | orchestrator | Group hosts based on 
enabled services ----------------------------------- 1.45s 2025-01-16 15:01:50.832281 | orchestrator | module-load : Load modules ---------------------------------------------- 1.27s 2025-01-16 15:01:50.832286 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.11s 2025-01-16 15:01:50.832291 | orchestrator | 2025-01-16 15:01:47 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:01:50.832308 | orchestrator | 2025-01-16 15:01:50 | INFO  | Task b9be7df1-7991-4405-ac6d-292561267f67 is in state STARTED 2025-01-16 15:01:50.833338 | orchestrator | 2025-01-16 15:01:50 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:01:50.835184 | orchestrator | 2025-01-16 15:01:50 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:01:50.835862 | orchestrator | 2025-01-16 15:01:50 | INFO  | Task 3c6588af-5421-4033-babf-7a0ad4d64bc5 is in state STARTED 2025-01-16 15:01:50.835891 | orchestrator | 2025-01-16 15:01:50 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:01:53.857032 | orchestrator | 2025-01-16 15:01:50 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:01:53.857353 | orchestrator | 2025-01-16 15:01:53 | INFO  | Task b9be7df1-7991-4405-ac6d-292561267f67 is in state STARTED 2025-01-16 15:01:56.880366 | orchestrator | 2025-01-16 15:01:53 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:01:56.880495 | orchestrator | 2025-01-16 15:01:53 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:01:56.880515 | orchestrator | 2025-01-16 15:01:53 | INFO  | Task 3c6588af-5421-4033-babf-7a0ad4d64bc5 is in state STARTED 2025-01-16 15:01:56.880531 | orchestrator | 2025-01-16 15:01:53 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:01:56.880546 | orchestrator | 2025-01-16 15:01:53 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:01:56.880579 | orchestrator | 2025-01-16 15:01:56 | INFO  | Task b9be7df1-7991-4405-ac6d-292561267f67 is in state STARTED 2025-01-16 15:01:59.900930 | orchestrator | 2025-01-16 15:01:56 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:01:59.901176 | orchestrator | 2025-01-16 15:01:56 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:01:59.901207 | orchestrator | 2025-01-16 15:01:56 | INFO  | Task 3c6588af-5421-4033-babf-7a0ad4d64bc5 is in state STARTED 2025-01-16 15:01:59.901222 | orchestrator | 2025-01-16 15:01:56 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:01:59.901235 | orchestrator | 2025-01-16 15:01:56 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:01:59.901265 | orchestrator | 2025-01-16 15:01:59 | INFO  | Task b9be7df1-7991-4405-ac6d-292561267f67 is in state STARTED 2025-01-16 15:01:59.901742 | orchestrator | 2025-01-16 15:01:59 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:01:59.901794 | orchestrator | 2025-01-16 15:01:59 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:01:59.902074 | orchestrator | 2025-01-16 15:01:59 | INFO  | Task 3c6588af-5421-4033-babf-7a0ad4d64bc5 is in state STARTED 2025-01-16 15:01:59.902609 | orchestrator | 2025-01-16 15:01:59 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:01:59.903789 | orchestrator | 2025-01-16 15:01:59 | INFO  | Wait 
1 second(s) until the next check 2025-01-16 15:02:02.928552 | orchestrator | 2025-01-16 15:02:02 | INFO  | Task b9be7df1-7991-4405-ac6d-292561267f67 is in state STARTED 2025-01-16 15:02:05.957527 | orchestrator | 2025-01-16 15:02:02 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:02:05.957823 | orchestrator | 2025-01-16 15:02:02 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:02:05.957855 | orchestrator | 2025-01-16 15:02:02 | INFO  | Task 3c6588af-5421-4033-babf-7a0ad4d64bc5 is in state STARTED 2025-01-16 15:02:05.957874 | orchestrator | 2025-01-16 15:02:02 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:02:05.957893 | orchestrator | 2025-01-16 15:02:02 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:02:05.957933 | orchestrator | 2025-01-16 15:02:05 | INFO  | Task b9be7df1-7991-4405-ac6d-292561267f67 is in state STARTED 2025-01-16 15:02:05.958464 | orchestrator | 2025-01-16 15:02:05 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:02:05.958489 | orchestrator | 2025-01-16 15:02:05 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:02:05.958508 | orchestrator | 2025-01-16 15:02:05 | INFO  | Task 3c6588af-5421-4033-babf-7a0ad4d64bc5 is in state STARTED 2025-01-16 15:02:05.959025 | orchestrator | 2025-01-16 15:02:05 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:02:05.959523 | orchestrator | 2025-01-16 15:02:05 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:02:08.982284 | orchestrator | 2025-01-16 15:02:08 | INFO  | Task b9be7df1-7991-4405-ac6d-292561267f67 is in state STARTED 2025-01-16 15:02:08.982690 | orchestrator | 2025-01-16 15:02:08 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:02:08.982770 | orchestrator | 2025-01-16 15:02:08 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:02:08.982783 | orchestrator | 2025-01-16 15:02:08 | INFO  | Task 3c6588af-5421-4033-babf-7a0ad4d64bc5 is in state STARTED 2025-01-16 15:02:08.982802 | orchestrator | 2025-01-16 15:02:08 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:02:12.003791 | orchestrator | 2025-01-16 15:02:08 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:02:12.003951 | orchestrator | 2025-01-16 15:02:11 | INFO  | Task b9be7df1-7991-4405-ac6d-292561267f67 is in state STARTED 2025-01-16 15:02:12.006249 | orchestrator | 2025-01-16 15:02:11 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:02:12.006759 | orchestrator | 2025-01-16 15:02:12 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:02:12.009855 | orchestrator | 2025-01-16 15:02:12 | INFO  | Task 3c6588af-5421-4033-babf-7a0ad4d64bc5 is in state STARTED 2025-01-16 15:02:15.033444 | orchestrator | 2025-01-16 15:02:12 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:02:15.033690 | orchestrator | 2025-01-16 15:02:12 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:02:15.033761 | orchestrator | 2025-01-16 15:02:15 | INFO  | Task b9be7df1-7991-4405-ac6d-292561267f67 is in state STARTED 2025-01-16 15:02:15.034380 | orchestrator | 2025-01-16 15:02:15 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:02:15.034474 | orchestrator | 2025-01-16 15:02:15 | INFO  | Task 
6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:02:15.034676 | orchestrator | 2025-01-16 15:02:15 | INFO  | Task 3c6588af-5421-4033-babf-7a0ad4d64bc5 is in state STARTED 2025-01-16 15:02:15.035125 | orchestrator | 2025-01-16 15:02:15 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:02:15.035200 | orchestrator | 2025-01-16 15:02:15 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:02:18.060146 | orchestrator | 2025-01-16 15:02:18 | INFO  | Task b9be7df1-7991-4405-ac6d-292561267f67 is in state STARTED 2025-01-16 15:02:18.060537 | orchestrator | 2025-01-16 15:02:18 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:02:18.060568 | orchestrator | 2025-01-16 15:02:18 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:02:18.060589 | orchestrator | 2025-01-16 15:02:18 | INFO  | Task 3c6588af-5421-4033-babf-7a0ad4d64bc5 is in state STARTED 2025-01-16 15:02:18.061581 | orchestrator | 2025-01-16 15:02:18 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:02:21.089191 | orchestrator | 2025-01-16 15:02:18 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:02:21.089326 | orchestrator | 2025-01-16 15:02:21 | INFO  | Task b9be7df1-7991-4405-ac6d-292561267f67 is in state STARTED 2025-01-16 15:02:21.091511 | orchestrator | 2025-01-16 15:02:21 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:02:21.091915 | orchestrator | 2025-01-16 15:02:21 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:02:21.092538 | orchestrator | 2025-01-16 15:02:21 | INFO  | Task 3c6588af-5421-4033-babf-7a0ad4d64bc5 is in state STARTED 2025-01-16 15:02:21.093001 | orchestrator | 2025-01-16 15:02:21 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:02:21.097195 | orchestrator | 2025-01-16 15:02:21 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:02:24.123777 | orchestrator | 2025-01-16 15:02:24 | INFO  | Task b9be7df1-7991-4405-ac6d-292561267f67 is in state STARTED 2025-01-16 15:02:24.123908 | orchestrator | 2025-01-16 15:02:24 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:02:24.123923 | orchestrator | 2025-01-16 15:02:24 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:02:24.127345 | orchestrator | 2025-01-16 15:02:24 | INFO  | Task 3c6588af-5421-4033-babf-7a0ad4d64bc5 is in state STARTED 2025-01-16 15:02:24.127692 | orchestrator | 2025-01-16 15:02:24 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:02:27.170541 | orchestrator | 2025-01-16 15:02:24 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:02:27.170771 | orchestrator | 2025-01-16 15:02:27 | INFO  | Task b9be7df1-7991-4405-ac6d-292561267f67 is in state STARTED 2025-01-16 15:02:27.174403 | orchestrator | 2025-01-16 15:02:27 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:02:27.174474 | orchestrator | 2025-01-16 15:02:27 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:02:27.174513 | orchestrator | 2025-01-16 15:02:27 | INFO  | Task 3c6588af-5421-4033-babf-7a0ad4d64bc5 is in state STARTED 2025-01-16 15:02:27.175157 | orchestrator | 2025-01-16 15:02:27 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:02:27.175370 | orchestrator | 2025-01-16 
15:02:27 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:02:30.205951 | orchestrator | 2025-01-16 15:02:30 | INFO  | Task b9be7df1-7991-4405-ac6d-292561267f67 is in state STARTED 2025-01-16 15:02:30.206216 | orchestrator | 2025-01-16 15:02:30 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:02:30.206268 | orchestrator | 2025-01-16 15:02:30 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:02:30.206620 | orchestrator | 2025-01-16 15:02:30 | INFO  | Task 3c6588af-5421-4033-babf-7a0ad4d64bc5 is in state STARTED 2025-01-16 15:02:30.207168 | orchestrator | 2025-01-16 15:02:30 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:02:33.236314 | orchestrator | 2025-01-16 15:02:30 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:02:33.236516 | orchestrator | 2025-01-16 15:02:33 | INFO  | Task b9be7df1-7991-4405-ac6d-292561267f67 is in state STARTED 2025-01-16 15:02:33.236801 | orchestrator | 2025-01-16 15:02:33 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:02:33.236830 | orchestrator | 2025-01-16 15:02:33 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:02:33.236847 | orchestrator | 2025-01-16 15:02:33 | INFO  | Task 3c6588af-5421-4033-babf-7a0ad4d64bc5 is in state STARTED 2025-01-16 15:02:33.237463 | orchestrator | 2025-01-16 15:02:33 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:02:36.275077 | orchestrator | 2025-01-16 15:02:33 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:02:36.275344 | orchestrator | 2025-01-16 15:02:36 | INFO  | Task b9be7df1-7991-4405-ac6d-292561267f67 is in state STARTED 2025-01-16 15:02:36.275632 | orchestrator | 2025-01-16 15:02:36 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:02:36.275670 | orchestrator | 2025-01-16 15:02:36 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:02:36.276165 | orchestrator | 2025-01-16 15:02:36 | INFO  | Task 3c6588af-5421-4033-babf-7a0ad4d64bc5 is in state STARTED 2025-01-16 15:02:36.282358 | orchestrator | 2025-01-16 15:02:36 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:02:39.310298 | orchestrator | 2025-01-16 15:02:36 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:02:39.310542 | orchestrator | 2025-01-16 15:02:39 | INFO  | Task b9be7df1-7991-4405-ac6d-292561267f67 is in state STARTED 2025-01-16 15:02:39.311101 | orchestrator | 2025-01-16 15:02:39 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:02:39.311147 | orchestrator | 2025-01-16 15:02:39 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:02:39.311172 | orchestrator | 2025-01-16 15:02:39 | INFO  | Task 3c6588af-5421-4033-babf-7a0ad4d64bc5 is in state STARTED 2025-01-16 15:02:39.311564 | orchestrator | 2025-01-16 15:02:39 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:02:39.311667 | orchestrator | 2025-01-16 15:02:39 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:02:42.350511 | orchestrator | 2025-01-16 15:02:42.350625 | orchestrator | 2025-01-16 15:02:42.350646 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-01-16 15:02:42.350662 | orchestrator | 2025-01-16 15:02:42.350675 | orchestrator | TASK [Inform the user about 
the following task] ******************************** 2025-01-16 15:02:42.350736 | orchestrator | Thursday 16 January 2025 15:00:58 +0000 (0:00:00.117) 0:00:00.117 ****** 2025-01-16 15:02:42.350750 | orchestrator | ok: [localhost] => { 2025-01-16 15:02:42.350764 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-01-16 15:02:42.350778 | orchestrator | } 2025-01-16 15:02:42.350790 | orchestrator | 2025-01-16 15:02:42.350803 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-01-16 15:02:42.350816 | orchestrator | Thursday 16 January 2025 15:00:58 +0000 (0:00:00.262) 0:00:00.379 ****** 2025-01-16 15:02:42.350856 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-01-16 15:02:42.350871 | orchestrator | ...ignoring 2025-01-16 15:02:42.350884 | orchestrator | 2025-01-16 15:02:42.350897 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-01-16 15:02:42.350910 | orchestrator | Thursday 16 January 2025 15:01:00 +0000 (0:00:02.672) 0:00:03.052 ****** 2025-01-16 15:02:42.350924 | orchestrator | skipping: [localhost] 2025-01-16 15:02:42.350937 | orchestrator | 2025-01-16 15:02:42.350950 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-01-16 15:02:42.350963 | orchestrator | Thursday 16 January 2025 15:01:01 +0000 (0:00:00.223) 0:00:03.276 ****** 2025-01-16 15:02:42.350976 | orchestrator | ok: [localhost] 2025-01-16 15:02:42.350989 | orchestrator | 2025-01-16 15:02:42.351002 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-01-16 15:02:42.351015 | orchestrator | 2025-01-16 15:02:42.351042 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-01-16 15:02:42.351055 | orchestrator | Thursday 16 January 2025 15:01:01 +0000 (0:00:00.337) 0:00:03.614 ****** 2025-01-16 15:02:42.351070 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:02:42.351084 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:02:42.351098 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:02:42.351112 | orchestrator | 2025-01-16 15:02:42.351126 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-01-16 15:02:42.351141 | orchestrator | Thursday 16 January 2025 15:01:02 +0000 (0:00:00.921) 0:00:04.535 ****** 2025-01-16 15:02:42.351155 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-01-16 15:02:42.351169 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-01-16 15:02:42.351183 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-01-16 15:02:42.351197 | orchestrator | 2025-01-16 15:02:42.351212 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-01-16 15:02:42.351226 | orchestrator | 2025-01-16 15:02:42.351240 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-01-16 15:02:42.351253 | orchestrator | Thursday 16 January 2025 15:01:03 +0000 (0:00:01.129) 0:00:05.664 ****** 2025-01-16 15:02:42.351268 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:02:42.351282 | orchestrator | 
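
The "Set kolla_action_rabbitmq" play above selects between a fresh deployment and an upgrade by probing the RabbitMQ management endpoint (192.168.16.9:15672, per the log) for the string "RabbitMQ Management"; the probe is expected to fail and be ignored when RabbitMQ has not been deployed yet. Below is a minimal Python sketch of that decision logic; the helper name rabbitmq_management_reachable is illustrative and the HTTP probe only approximates what the Ansible wait_for check does, so this is not the playbook's actual implementation.

import urllib.request

def rabbitmq_management_reachable(host, port=15672, timeout=2.0):
    # Probe the management UI and look for its banner, analogous to the
    # wait_for search_string check in the "Check RabbitMQ service" task.
    try:
        with urllib.request.urlopen(f"http://{host}:{port}/", timeout=timeout) as resp:
            return b"RabbitMQ Management" in resp.read()
    except OSError:
        # Unreachable or timed out: RabbitMQ has not been deployed yet,
        # which the play explicitly treats as fine on a first run.
        return False

# Upgrade only if RabbitMQ is already running; otherwise keep the regular
# action (kolla_action_ng in the play), as happened in this run.
kolla_action_rabbitmq = "upgrade" if rabbitmq_management_reachable("192.168.16.9") else "deploy"
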
2025-01-16 15:02:42.351296 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-01-16 15:02:42.351310 | orchestrator | Thursday 16 January 2025 15:01:05 +0000 (0:00:01.802) 0:00:07.467 ****** 2025-01-16 15:02:42.351324 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:02:42.351442 | orchestrator | 2025-01-16 15:02:42.351460 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-01-16 15:02:42.351473 | orchestrator | Thursday 16 January 2025 15:01:06 +0000 (0:00:01.356) 0:00:08.824 ****** 2025-01-16 15:02:42.351486 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:02:42.351499 | orchestrator | 2025-01-16 15:02:42.351512 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2025-01-16 15:02:42.351525 | orchestrator | Thursday 16 January 2025 15:01:07 +0000 (0:00:00.729) 0:00:09.553 ****** 2025-01-16 15:02:42.351537 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:02:42.351550 | orchestrator | 2025-01-16 15:02:42.351563 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-01-16 15:02:42.351575 | orchestrator | Thursday 16 January 2025 15:01:07 +0000 (0:00:00.504) 0:00:10.057 ****** 2025-01-16 15:02:42.351588 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:02:42.351600 | orchestrator | 2025-01-16 15:02:42.351613 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-01-16 15:02:42.351625 | orchestrator | Thursday 16 January 2025 15:01:09 +0000 (0:00:01.099) 0:00:11.157 ****** 2025-01-16 15:02:42.351648 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:02:42.351661 | orchestrator | 2025-01-16 15:02:42.351673 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-01-16 15:02:42.351721 | orchestrator | Thursday 16 January 2025 15:01:10 +0000 (0:00:01.816) 0:00:12.974 ****** 2025-01-16 15:02:42.351735 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:02:42.351748 | orchestrator | 2025-01-16 15:02:42.351761 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-01-16 15:02:42.351773 | orchestrator | Thursday 16 January 2025 15:01:12 +0000 (0:00:01.392) 0:00:14.366 ****** 2025-01-16 15:02:42.351786 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:02:42.351798 | orchestrator | 2025-01-16 15:02:42.351811 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-01-16 15:02:42.351823 | orchestrator | Thursday 16 January 2025 15:01:13 +0000 (0:00:00.874) 0:00:15.241 ****** 2025-01-16 15:02:42.351836 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:02:42.351848 | orchestrator | 2025-01-16 15:02:42.351861 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-01-16 15:02:42.351873 | orchestrator | Thursday 16 January 2025 15:01:13 +0000 (0:00:00.352) 0:00:15.593 ****** 2025-01-16 15:02:42.351886 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:02:42.351898 | orchestrator | 2025-01-16 15:02:42.351922 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-01-16 15:02:42.351935 | orchestrator | Thursday 16 January 2025 15:01:14 +0000 (0:00:00.581) 0:00:16.175 ****** 2025-01-16 
15:02:42.351950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-01-16 15:02:42.351968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-01-16 15:02:42.351982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-01-16 15:02:42.352034 | orchestrator | 2025-01-16 15:02:42.352048 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-01-16 15:02:42.352061 | orchestrator | Thursday 16 January 2025 15:01:15 +0000 (0:00:01.544) 
0:00:17.720 ****** 2025-01-16 15:02:42.352085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-01-16 15:02:42.352100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-01-16 15:02:42.352115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-01-16 15:02:42.352135 | orchestrator | 2025-01-16 15:02:42.352150 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-01-16 15:02:42.352164 | orchestrator | Thursday 16 January 2025 15:01:18 
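The config.json files copied in the task above are what the container's kolla entrypoint consumes at start time: with KOLLA_CONFIG_STRATEGY=COPY_ALWAYS (visible in the item dictionaries above), the files mounted read-only under /var/lib/kolla/config_files/ are copied to their final locations on every container start. A rough sketch of the shape of such a file, with illustrative values only (the real template ships with the kolla-ansible rabbitmq role):

  cat /etc/kolla/rabbitmq/config.json
  {
      "command": "/usr/sbin/rabbitmq-server",
      "config_files": [
          {
              "source": "/var/lib/kolla/config_files/rabbitmq.conf",
              "dest": "/etc/rabbitmq/rabbitmq.conf",
              "owner": "rabbitmq",
              "perm": "0600"
          }
      ]
  }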
+0000 (0:00:03.159) 0:00:20.879 ****** 2025-01-16 15:02:42.352178 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-01-16 15:02:42.352192 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-01-16 15:02:42.352207 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-01-16 15:02:42.352221 | orchestrator | 2025-01-16 15:02:42.352240 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-01-16 15:02:42.352255 | orchestrator | Thursday 16 January 2025 15:01:20 +0000 (0:00:02.181) 0:00:23.061 ****** 2025-01-16 15:02:42.352269 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-01-16 15:02:42.352284 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-01-16 15:02:42.352302 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-01-16 15:02:42.352317 | orchestrator | 2025-01-16 15:02:42.352331 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-01-16 15:02:42.352345 | orchestrator | Thursday 16 January 2025 15:01:23 +0000 (0:00:02.517) 0:00:25.578 ****** 2025-01-16 15:02:42.352358 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-01-16 15:02:42.352370 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-01-16 15:02:42.352383 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-01-16 15:02:42.352395 | orchestrator | 2025-01-16 15:02:42.352408 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-01-16 15:02:42.352426 | orchestrator | Thursday 16 January 2025 15:01:24 +0000 (0:00:01.259) 0:00:26.837 ****** 2025-01-16 15:02:42.352439 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-01-16 15:02:42.352451 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-01-16 15:02:42.352464 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-01-16 15:02:42.352476 | orchestrator | 2025-01-16 15:02:42.352489 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-01-16 15:02:42.352501 | orchestrator | Thursday 16 January 2025 15:01:26 +0000 (0:00:01.573) 0:00:28.411 ****** 2025-01-16 15:02:42.352514 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-01-16 15:02:42.352526 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-01-16 15:02:42.352538 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-01-16 15:02:42.352551 | orchestrator | 2025-01-16 15:02:42.352563 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-01-16 15:02:42.352576 | orchestrator | Thursday 16 January 2025 15:01:27 +0000 (0:00:00.966) 0:00:29.377 ****** 2025-01-16 15:02:42.352588 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-01-16 15:02:42.352600 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-01-16 15:02:42.352613 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-01-16 15:02:42.352625 | orchestrator | 2025-01-16 15:02:42.352644 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-01-16 15:02:42.352657 | orchestrator | Thursday 16 January 2025 15:01:28 +0000 (0:00:01.175) 0:00:30.553 ****** 2025-01-16 15:02:42.352669 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:02:42.352682 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:02:42.352711 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:02:42.352723 | orchestrator | 2025-01-16 15:02:42.352736 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-01-16 15:02:42.352749 | orchestrator | Thursday 16 January 2025 15:01:29 +0000 (0:00:00.714) 0:00:31.268 ****** 2025-01-16 15:02:42.352762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-01-16 15:02:42.352787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-01-16 15:02:42.352809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/rabbitmq:2024.1', 
'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-01-16 15:02:42.352823 | orchestrator | 2025-01-16 15:02:42.352836 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-01-16 15:02:42.352856 | orchestrator | Thursday 16 January 2025 15:01:30 +0000 (0:00:00.953) 0:00:32.221 ****** 2025-01-16 15:02:42.352869 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:02:42.352881 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:02:42.352894 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:02:42.352906 | orchestrator | 2025-01-16 15:02:42.352919 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-01-16 15:02:42.352932 | orchestrator | Thursday 16 January 2025 15:01:30 +0000 (0:00:00.652) 0:00:32.873 ****** 2025-01-16 15:02:42.352944 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:02:42.352957 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:02:42.352969 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:02:42.353047 | orchestrator | 2025-01-16 15:02:42.353061 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-01-16 15:02:42.353074 | orchestrator | Thursday 16 January 2025 15:01:33 +0000 (0:00:02.997) 0:00:35.871 ****** 2025-01-16 15:02:42.353087 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:02:42.353099 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:02:42.353112 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:02:42.353124 | orchestrator | 2025-01-16 15:02:42.353137 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-01-16 15:02:42.353149 | orchestrator | 2025-01-16 15:02:42.353162 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-01-16 15:02:42.353174 | orchestrator | Thursday 16 January 2025 15:01:34 +0000 (0:00:00.544) 0:00:36.415 ****** 2025-01-16 15:02:42.353187 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:02:42.353199 | orchestrator | 2025-01-16 15:02:42.353212 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-01-16 15:02:42.353224 | orchestrator | Thursday 16 January 2025 15:01:34 +0000 (0:00:00.400) 0:00:36.816 ****** 2025-01-16 15:02:42.353237 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:02:42.353250 | orchestrator | 2025-01-16 15:02:42.353262 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-01-16 15:02:42.353274 | orchestrator | Thursday 16 January 2025 15:01:35 +0000 (0:00:00.370) 0:00:37.186 ****** 2025-01-16 15:02:42.353287 | orchestrator | 
changed: [testbed-node-0] 2025-01-16 15:02:42.353299 | orchestrator | 2025-01-16 15:02:42.353312 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-01-16 15:02:42.353324 | orchestrator | Thursday 16 January 2025 15:01:41 +0000 (0:00:06.538) 0:00:43.725 ****** 2025-01-16 15:02:42.353337 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:02:42.353349 | orchestrator | 2025-01-16 15:02:42.353366 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-01-16 15:02:42.353379 | orchestrator | 2025-01-16 15:02:42.353392 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-01-16 15:02:42.353404 | orchestrator | Thursday 16 January 2025 15:02:20 +0000 (0:00:39.211) 0:01:22.936 ****** 2025-01-16 15:02:42.353417 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:02:42.353429 | orchestrator | 2025-01-16 15:02:42.353442 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-01-16 15:02:42.353454 | orchestrator | Thursday 16 January 2025 15:02:21 +0000 (0:00:00.534) 0:01:23.471 ****** 2025-01-16 15:02:42.353467 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:02:42.353479 | orchestrator | 2025-01-16 15:02:42.353492 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-01-16 15:02:42.353504 | orchestrator | Thursday 16 January 2025 15:02:21 +0000 (0:00:00.214) 0:01:23.686 ****** 2025-01-16 15:02:42.353517 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:02:42.353530 | orchestrator | 2025-01-16 15:02:42.353542 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-01-16 15:02:42.353555 | orchestrator | Thursday 16 January 2025 15:02:23 +0000 (0:00:01.667) 0:01:25.353 ****** 2025-01-16 15:02:42.353567 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:02:42.353579 | orchestrator | 2025-01-16 15:02:42.353592 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-01-16 15:02:42.353613 | orchestrator | 2025-01-16 15:02:42.353625 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-01-16 15:02:42.353638 | orchestrator | Thursday 16 January 2025 15:02:30 +0000 (0:00:07.348) 0:01:32.701 ****** 2025-01-16 15:02:42.353650 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:02:42.353662 | orchestrator | 2025-01-16 15:02:42.353675 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-01-16 15:02:42.353717 | orchestrator | Thursday 16 January 2025 15:02:31 +0000 (0:00:00.493) 0:01:33.195 ****** 2025-01-16 15:02:42.353730 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:02:42.353743 | orchestrator | 2025-01-16 15:02:42.353755 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-01-16 15:02:42.353768 | orchestrator | Thursday 16 January 2025 15:02:31 +0000 (0:00:00.182) 0:01:33.378 ****** 2025-01-16 15:02:42.353780 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:02:42.353798 | orchestrator | 2025-01-16 15:02:42.353817 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-01-16 15:02:42.353831 | orchestrator | Thursday 16 January 2025 15:02:32 +0000 (0:00:01.247) 0:01:34.625 ****** 2025-01-16 15:02:42.353843 | orchestrator | changed: 
[testbed-node-2] 2025-01-16 15:02:42.353855 | orchestrator | 2025-01-16 15:02:42.353868 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-01-16 15:02:42.353880 | orchestrator | 2025-01-16 15:02:42.353893 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-01-16 15:02:42.353905 | orchestrator | Thursday 16 January 2025 15:02:40 +0000 (0:00:07.608) 0:01:42.234 ****** 2025-01-16 15:02:42.353918 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:02:42.353930 | orchestrator | 2025-01-16 15:02:42.353943 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-01-16 15:02:42.353955 | orchestrator | Thursday 16 January 2025 15:02:40 +0000 (0:00:00.411) 0:01:42.645 ****** 2025-01-16 15:02:42.353968 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-01-16 15:02:42.353980 | orchestrator | enable_outward_rabbitmq_True 2025-01-16 15:02:42.353994 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-01-16 15:02:42.354006 | orchestrator | outward_rabbitmq_restart 2025-01-16 15:02:42.354067 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:02:42.354083 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:02:42.354095 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:02:42.354108 | orchestrator | 2025-01-16 15:02:42.354121 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-01-16 15:02:42.354134 | orchestrator | skipping: no hosts matched 2025-01-16 15:02:42.354146 | orchestrator | 2025-01-16 15:02:42.354159 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-01-16 15:02:42.354171 | orchestrator | skipping: no hosts matched 2025-01-16 15:02:42.354184 | orchestrator | 2025-01-16 15:02:42.354196 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-01-16 15:02:42.354209 | orchestrator | skipping: no hosts matched 2025-01-16 15:02:42.354222 | orchestrator | 2025-01-16 15:02:42.354234 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 15:02:42.354247 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-01-16 15:02:42.354261 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-01-16 15:02:42.354274 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 15:02:42.354287 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 15:02:42.354307 | orchestrator | 2025-01-16 15:02:42.354320 | orchestrator | 2025-01-16 15:02:42.354332 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 15:02:42.354345 | orchestrator | Thursday 16 January 2025 15:02:41 +0000 (0:00:01.462) 0:01:44.108 ****** 2025-01-16 15:02:42.354357 | orchestrator | =============================================================================== 2025-01-16 15:02:42.354370 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 54.17s 2025-01-16 15:02:42.354388 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 9.45s 2025-01-16 
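The post-configuration play above enables all stable feature flags on each node after the rolling restart. Done by hand this amounts to roughly the following, assuming the kolla container name rabbitmq:

  docker exec rabbitmq rabbitmqctl enable_feature_flag all     # enable every stable flag
  docker exec rabbitmq rabbitmqctl list_feature_flags          # verify the resulting states

Running this only after all three nodes have rejoined is deliberate: a feature flag can only be enabled once every cluster member supports it.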
15:02:42.354400 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 3.16s 2025-01-16 15:02:42.354413 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 3.00s 2025-01-16 15:02:42.354425 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.67s 2025-01-16 15:02:42.354438 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.52s 2025-01-16 15:02:42.354450 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.18s 2025-01-16 15:02:42.354463 | orchestrator | rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 1.82s 2025-01-16 15:02:42.354475 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.80s 2025-01-16 15:02:42.354487 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.57s 2025-01-16 15:02:42.354500 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.54s 2025-01-16 15:02:42.354512 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 1.46s 2025-01-16 15:02:42.354525 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.43s 2025-01-16 15:02:42.354537 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.39s 2025-01-16 15:02:42.354550 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.36s 2025-01-16 15:02:42.354563 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.26s 2025-01-16 15:02:42.354575 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.18s 2025-01-16 15:02:42.354588 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.13s 2025-01-16 15:02:42.354600 | orchestrator | rabbitmq : Check if running RabbitMQ is at most one version behind ------ 1.10s 2025-01-16 15:02:42.354613 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 0.97s 2025-01-16 15:02:42.354625 | orchestrator | 2025-01-16 15:02:42 | INFO  | Task b9be7df1-7991-4405-ac6d-292561267f67 is in state SUCCESS 2025-01-16 15:02:42.354647 | orchestrator | 2025-01-16 15:02:42 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:02:45.377081 | orchestrator | 2025-01-16 15:02:42 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:02:45.377188 | orchestrator | 2025-01-16 15:02:42 | INFO  | Task 3c6588af-5421-4033-babf-7a0ad4d64bc5 is in state STARTED 2025-01-16 15:02:45.377203 | orchestrator | 2025-01-16 15:02:42 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:02:45.377214 | orchestrator | 2025-01-16 15:02:42 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:02:45.377238 | orchestrator | 2025-01-16 15:02:45 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:02:45.377626 | orchestrator | 2025-01-16 15:02:45 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:02:45.377650 | orchestrator | 2025-01-16 15:02:45 | INFO  | Task 3c6588af-5421-4033-babf-7a0ad4d64bc5 is in state STARTED 2025-01-16 15:02:45.378170 | orchestrator | 2025-01-16 15:02:45 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state 
STARTED 2025-01-16 15:02:48.406565 | orchestrator | 2025-01-16 15:02:45 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:02:48.406880 | orchestrator | 2025-01-16 15:02:48 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:02:48.412892 | orchestrator | 2025-01-16 15:02:48 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:02:48.412998 | orchestrator | 2025-01-16 15:02:48 | INFO  | Task 3c6588af-5421-4033-babf-7a0ad4d64bc5 is in state STARTED 2025-01-16 15:02:48.413028 | orchestrator | 2025-01-16 15:02:48 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:02:51.447387 | orchestrator | 2025-01-16 15:02:48 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:02:51.447499 | orchestrator | 2025-01-16 15:02:51 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:02:51.448420 | orchestrator | 2025-01-16 15:02:51 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:02:51.448446 | orchestrator | 2025-01-16 15:02:51 | INFO  | Task 3c6588af-5421-4033-babf-7a0ad4d64bc5 is in state STARTED 2025-01-16 15:02:51.448465 | orchestrator | 2025-01-16 15:02:51 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:02:54.474225 | orchestrator | 2025-01-16 15:02:51 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:02:54.474419 | orchestrator | 2025-01-16 15:02:54 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:02:54.475038 | orchestrator | 2025-01-16 15:02:54 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:02:54.475072 | orchestrator | 2025-01-16 15:02:54 | INFO  | Task 3c6588af-5421-4033-babf-7a0ad4d64bc5 is in state STARTED 2025-01-16 15:02:54.475089 | orchestrator | 2025-01-16 15:02:54 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:02:57.498267 | orchestrator | 2025-01-16 15:02:54 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:02:57.498399 | orchestrator | 2025-01-16 15:02:57 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:02:57.498624 | orchestrator | 2025-01-16 15:02:57 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:02:57.498664 | orchestrator | 2025-01-16 15:02:57 | INFO  | Task 3c6588af-5421-4033-babf-7a0ad4d64bc5 is in state STARTED 2025-01-16 15:02:57.499375 | orchestrator | 2025-01-16 15:02:57 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:03:00.529286 | orchestrator | 2025-01-16 15:02:57 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:03:00.529390 | orchestrator | 2025-01-16 15:03:00 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:03:03.556411 | orchestrator | 2025-01-16 15:03:00 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:03:03.556549 | orchestrator | 2025-01-16 15:03:00 | INFO  | Task 3c6588af-5421-4033-babf-7a0ad4d64bc5 is in state STARTED 2025-01-16 15:03:03.556567 | orchestrator | 2025-01-16 15:03:00 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:03:03.556580 | orchestrator | 2025-01-16 15:03:00 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:03:03.556607 | orchestrator | 2025-01-16 15:03:03 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 
15:03:03.557558 | orchestrator | 2025-01-16 15:03:03 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:03:03.557616 | orchestrator | 2025-01-16 15:03:03 | INFO  | Task 3c6588af-5421-4033-babf-7a0ad4d64bc5 is in state STARTED 2025-01-16 15:03:03.557662 | orchestrator | 2025-01-16 15:03:03 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:03:06.585061 | orchestrator | 2025-01-16 15:03:03 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:03:06.585228 | orchestrator | 2025-01-16 15:03:06 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:03:06.585381 | orchestrator | 2025-01-16 15:03:06 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:03:06.586005 | orchestrator | 2025-01-16 15:03:06 | INFO  | Task 3c6588af-5421-4033-babf-7a0ad4d64bc5 is in state STARTED 2025-01-16 15:03:09.633981 | orchestrator | 2025-01-16 15:03:06 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:03:09.634115 | orchestrator | 2025-01-16 15:03:06 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:03:09.634138 | orchestrator | 2025-01-16 15:03:09 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:03:09.644087 | orchestrator | 2025-01-16 15:03:09 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:03:09.644231 | orchestrator | 2025-01-16 15:03:09 | INFO  | Task 3c6588af-5421-4033-babf-7a0ad4d64bc5 is in state STARTED 2025-01-16 15:03:09.646793 | orchestrator | 2025-01-16 15:03:09 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:03:12.676222 | orchestrator | 2025-01-16 15:03:09 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:03:12.676371 | orchestrator | 2025-01-16 15:03:12 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:03:15.700463 | orchestrator | 2025-01-16 15:03:12 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:03:15.700591 | orchestrator | 2025-01-16 15:03:12 | INFO  | Task 3c6588af-5421-4033-babf-7a0ad4d64bc5 is in state STARTED 2025-01-16 15:03:15.700611 | orchestrator | 2025-01-16 15:03:12 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:03:15.700625 | orchestrator | 2025-01-16 15:03:12 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:03:15.700656 | orchestrator | 2025-01-16 15:03:15 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:03:15.700906 | orchestrator | 2025-01-16 15:03:15 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:03:15.700937 | orchestrator | 2025-01-16 15:03:15 | INFO  | Task 3c6588af-5421-4033-babf-7a0ad4d64bc5 is in state STARTED 2025-01-16 15:03:15.701598 | orchestrator | 2025-01-16 15:03:15 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:03:18.732916 | orchestrator | 2025-01-16 15:03:15 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:03:18.733038 | orchestrator | 2025-01-16 15:03:18 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:03:18.734212 | orchestrator | 2025-01-16 15:03:18 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:03:18.735361 | orchestrator | 2025-01-16 15:03:18 | INFO  | Task 3c6588af-5421-4033-babf-7a0ad4d64bc5 is in state STARTED 2025-01-16 
15:03:18.735450 | orchestrator | 2025-01-16 15:03:18 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:03:21.771285 | orchestrator | 2025-01-16 15:03:18 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:03:21.771496 | orchestrator | 2025-01-16 15:03:21 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:03:21.772117 | orchestrator | 2025-01-16 15:03:21 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:03:21.772186 | orchestrator | 2025-01-16 15:03:21 | INFO  | Task 3c6588af-5421-4033-babf-7a0ad4d64bc5 is in state STARTED 2025-01-16 15:03:21.772204 | orchestrator | 2025-01-16 15:03:21 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:03:24.799183 | orchestrator | 2025-01-16 15:03:21 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:03:24.799425 | orchestrator | 2025-01-16 15:03:24 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:03:24.800447 | orchestrator | 2025-01-16 15:03:24 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:03:24.800491 | orchestrator | 2025-01-16 15:03:24 | INFO  | Task 3c6588af-5421-4033-babf-7a0ad4d64bc5 is in state STARTED 2025-01-16 15:03:24.801170 | orchestrator | 2025-01-16 15:03:24 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:03:24.802467 | orchestrator | 2025-01-16 15:03:24 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:03:27.831245 | orchestrator | 2025-01-16 15:03:27 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:03:27.832393 | orchestrator | 2025-01-16 15:03:27 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:03:27.832432 | orchestrator | 2025-01-16 15:03:27 | INFO  | Task 3c6588af-5421-4033-babf-7a0ad4d64bc5 is in state STARTED 2025-01-16 15:03:27.832454 | orchestrator | 2025-01-16 15:03:27 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:03:30.878747 | orchestrator | 2025-01-16 15:03:27 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:03:30.878882 | orchestrator | 2025-01-16 15:03:30 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:03:33.891107 | orchestrator | 2025-01-16 15:03:30 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:03:33.891222 | orchestrator | 2025-01-16 15:03:30 | INFO  | Task 3c6588af-5421-4033-babf-7a0ad4d64bc5 is in state STARTED 2025-01-16 15:03:33.891239 | orchestrator | 2025-01-16 15:03:30 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:03:33.891252 | orchestrator | 2025-01-16 15:03:30 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:03:33.891356 | orchestrator | 2025-01-16 15:03:33 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:03:33.893422 | orchestrator | 2025-01-16 15:03:33 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:03:33.893476 | orchestrator | 2025-01-16 15:03:33 | INFO  | Task 3c6588af-5421-4033-babf-7a0ad4d64bc5 is in state SUCCESS 2025-01-16 15:03:33.893492 | orchestrator | 2025-01-16 15:03:33.893498 | orchestrator | 2025-01-16 15:03:33.893503 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-01-16 15:03:33.893509 | orchestrator | 2025-01-16 15:03:33.893515 | 
orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-01-16 15:03:33.893521 | orchestrator | Thursday 16 January 2025 15:01:47 +0000 (0:00:00.173) 0:00:00.173 ****** 2025-01-16 15:03:33.893527 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:03:33.893533 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:03:33.893539 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:03:33.893544 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:03:33.893550 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:03:33.893555 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:03:33.893560 | orchestrator | 2025-01-16 15:03:33.893566 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-01-16 15:03:33.893571 | orchestrator | Thursday 16 January 2025 15:01:47 +0000 (0:00:00.522) 0:00:00.696 ****** 2025-01-16 15:03:33.893590 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-01-16 15:03:33.893596 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-01-16 15:03:33.893601 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-01-16 15:03:33.893606 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-01-16 15:03:33.893611 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-01-16 15:03:33.893616 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-01-16 15:03:33.893622 | orchestrator | 2025-01-16 15:03:33.893627 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-01-16 15:03:33.893632 | orchestrator | 2025-01-16 15:03:33.893638 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-01-16 15:03:33.893643 | orchestrator | Thursday 16 January 2025 15:01:48 +0000 (0:00:01.017) 0:00:01.713 ****** 2025-01-16 15:03:33.893649 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 15:03:33.893686 | orchestrator | 2025-01-16 15:03:33.893693 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-01-16 15:03:33.893698 | orchestrator | Thursday 16 January 2025 15:01:49 +0000 (0:00:00.966) 0:00:02.680 ****** 2025-01-16 15:03:33.893705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.893717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.893723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.893728 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.893733 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.893745 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.893755 | orchestrator | 2025-01-16 15:03:33.893760 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-01-16 15:03:33.893766 | orchestrator | Thursday 16 January 2025 15:01:50 +0000 (0:00:00.805) 0:00:03.485 ****** 2025-01-16 15:03:33.893771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.893776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.893782 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.893787 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': 
True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.893795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.893800 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.893806 | orchestrator | 2025-01-16 15:03:33.893811 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-01-16 15:03:33.893819 | orchestrator | Thursday 16 January 2025 15:01:52 +0000 (0:00:01.393) 0:00:04.878 ****** 2025-01-16 15:03:33.893825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.893830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.893842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.893847 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.893852 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 
'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.893858 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.893863 | orchestrator | 2025-01-16 15:03:33.893868 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-01-16 15:03:33.893874 | orchestrator | Thursday 16 January 2025 15:01:53 +0000 (0:00:01.001) 0:00:05.880 ****** 2025-01-16 15:03:33.893879 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:03:33.893886 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:03:33.893891 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:03:33.893896 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:03:33.893901 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:03:33.893907 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:03:33.893912 | orchestrator | 2025-01-16 15:03:33.893917 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-01-16 15:03:33.893923 | orchestrator | Thursday 16 January 2025 15:01:55 +0000 (0:00:01.922) 0:00:07.802 ****** 2025-01-16 15:03:33.893928 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-01-16 15:03:33.893935 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-01-16 15:03:33.893941 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-01-16 15:03:33.893946 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-01-16 15:03:33.893951 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-01-16 15:03:33.893956 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-01-16 15:03:33.893961 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-01-16 15:03:33.893966 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-01-16 15:03:33.893971 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-01-16 15:03:33.893979 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-01-16 15:03:33.893985 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-01-16 15:03:33.893990 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-01-16 15:03:33.893995 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-01-16 15:03:33.894002 | orchestrator | changed: 
[testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-01-16 15:03:33.894007 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-01-16 15:03:33.894059 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-01-16 15:03:33.894067 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-01-16 15:03:33.894077 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-01-16 15:03:33.894083 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-01-16 15:03:33.894089 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-01-16 15:03:33.894095 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-01-16 15:03:33.894101 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-01-16 15:03:33.894107 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-01-16 15:03:33.894112 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-01-16 15:03:33.894118 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-01-16 15:03:33.894123 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-01-16 15:03:33.894129 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-01-16 15:03:33.894134 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-01-16 15:03:33.894140 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-01-16 15:03:33.894145 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-01-16 15:03:33.894151 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-01-16 15:03:33.894156 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-01-16 15:03:33.894162 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-01-16 15:03:33.894167 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-01-16 15:03:33.894173 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-01-16 15:03:33.894178 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-01-16 15:03:33.894184 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-01-16 15:03:33.894189 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 
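The "Configure OVN in OVSDB" task writes these settings as external_ids on the local Open_vSwitch row, which is how ovn-controller learns its encapsulation IP, tunnel type and the southbound DB endpoints. A minimal sketch of roughly equivalent manual commands for testbed-node-0, using the values shown in the log (run wherever the ovs-vsctl CLI can reach the local OVSDB, e.g. inside the openvswitch_vswitchd container on a kolla host):

  ovs-vsctl --may-exist add-br br-int
  ovs-vsctl set open . external-ids:ovn-encap-ip=192.168.16.10
  ovs-vsctl set open . external-ids:ovn-encap-type=geneve
  # inner double quotes keep ovs-vsctl from misreading the commas in the value
  ovs-vsctl set open . external-ids:ovn-remote='"tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642"'
  ovs-vsctl set open . external-ids:ovn-cms-options='"enable-chassis-as-gw,availability-zones=nova"'

Note from the item results that only the three control nodes get ovn-bridge-mappings and ovn-cms-options in state present; on the compute nodes those keys are kept absent, so only the controllers are eligible as gateway chassis.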
2025-01-16 15:03:33.894200 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-01-16 15:03:33.894206 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-01-16 15:03:33.894211 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-01-16 15:03:33.894219 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-01-16 15:03:33.894225 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-01-16 15:03:33.894231 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-01-16 15:03:33.894236 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-01-16 15:03:33.894242 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-01-16 15:03:33.894247 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-01-16 15:03:33.894253 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-01-16 15:03:33.894259 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-01-16 15:03:33.894264 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-01-16 15:03:33.894270 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-01-16 15:03:33.894276 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-01-16 15:03:33.894281 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-01-16 15:03:33.894289 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-01-16 15:03:33.894294 | orchestrator | 2025-01-16 15:03:33.894300 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-01-16 15:03:33.894305 | orchestrator | Thursday 16 January 2025 15:02:08 +0000 (0:00:13.254) 0:00:21.057 ****** 2025-01-16 15:03:33.894311 | orchestrator | 2025-01-16 15:03:33.894316 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-01-16 15:03:33.894322 | orchestrator | Thursday 16 January 2025 15:02:08 +0000 (0:00:00.060) 0:00:21.117 ****** 2025-01-16 15:03:33.894328 | orchestrator | 2025-01-16 15:03:33.894334 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-01-16 15:03:33.894339 | orchestrator | Thursday 16 January 2025 15:02:08 +0000 (0:00:00.289) 0:00:21.407 ****** 2025-01-16 15:03:33.894345 | orchestrator | 2025-01-16 15:03:33.894350 | 
orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-01-16 15:03:33.894355 | orchestrator | Thursday 16 January 2025 15:02:08 +0000 (0:00:00.083) 0:00:21.490 ****** 2025-01-16 15:03:33.894361 | orchestrator | 2025-01-16 15:03:33.894366 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-01-16 15:03:33.894372 | orchestrator | Thursday 16 January 2025 15:02:08 +0000 (0:00:00.086) 0:00:21.577 ****** 2025-01-16 15:03:33.894378 | orchestrator | 2025-01-16 15:03:33.894383 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-01-16 15:03:33.894389 | orchestrator | Thursday 16 January 2025 15:02:08 +0000 (0:00:00.079) 0:00:21.657 ****** 2025-01-16 15:03:33.894394 | orchestrator | 2025-01-16 15:03:33.894403 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-01-16 15:03:33.894408 | orchestrator | Thursday 16 January 2025 15:02:09 +0000 (0:00:00.191) 0:00:21.849 ****** 2025-01-16 15:03:33.894413 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:03:33.894418 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:03:33.894423 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:03:33.894428 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:03:33.894433 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:03:33.894438 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:03:33.894443 | orchestrator | 2025-01-16 15:03:33.894450 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-01-16 15:03:33.894455 | orchestrator | 2025-01-16 15:03:33.894460 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-01-16 15:03:33.894465 | orchestrator | Thursday 16 January 2025 15:02:21 +0000 (0:00:12.783) 0:00:34.632 ****** 2025-01-16 15:03:33.894470 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:03:33.894475 | orchestrator | 2025-01-16 15:03:33.894480 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-01-16 15:03:33.894486 | orchestrator | Thursday 16 January 2025 15:02:22 +0000 (0:00:00.775) 0:00:35.408 ****** 2025-01-16 15:03:33.894491 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:03:33.894496 | orchestrator | 2025-01-16 15:03:33.894501 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-01-16 15:03:33.894506 | orchestrator | Thursday 16 January 2025 15:02:23 +0000 (0:00:00.840) 0:00:36.248 ****** 2025-01-16 15:03:33.894510 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:03:33.894516 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:03:33.894521 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:03:33.894526 | orchestrator | 2025-01-16 15:03:33.894531 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-01-16 15:03:33.894535 | orchestrator | Thursday 16 January 2025 15:02:24 +0000 (0:00:01.050) 0:00:37.299 ****** 2025-01-16 15:03:33.894540 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:03:33.894545 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:03:33.894550 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:03:33.894555 | orchestrator | 2025-01-16 
15:03:33.894560 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-01-16 15:03:33.894565 | orchestrator | Thursday 16 January 2025 15:02:25 +0000 (0:00:01.052) 0:00:38.351 ****** 2025-01-16 15:03:33.894570 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:03:33.894575 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:03:33.894580 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:03:33.894585 | orchestrator | 2025-01-16 15:03:33.894590 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-01-16 15:03:33.894595 | orchestrator | Thursday 16 January 2025 15:02:26 +0000 (0:00:00.752) 0:00:39.103 ****** 2025-01-16 15:03:33.894600 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:03:33.894605 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:03:33.894609 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:03:33.894614 | orchestrator | 2025-01-16 15:03:33.894619 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-01-16 15:03:33.894624 | orchestrator | Thursday 16 January 2025 15:02:27 +0000 (0:00:00.736) 0:00:39.840 ****** 2025-01-16 15:03:33.894637 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:03:33.894642 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:03:33.894647 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:03:33.894666 | orchestrator | 2025-01-16 15:03:33.894672 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-01-16 15:03:33.894677 | orchestrator | Thursday 16 January 2025 15:02:28 +0000 (0:00:00.927) 0:00:40.768 ****** 2025-01-16 15:03:33.894681 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:03:33.894686 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:03:33.894695 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:03:33.894700 | orchestrator | 2025-01-16 15:03:33.894705 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-01-16 15:03:33.894710 | orchestrator | Thursday 16 January 2025 15:02:28 +0000 (0:00:00.505) 0:00:41.273 ****** 2025-01-16 15:03:33.894715 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:03:33.894720 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:03:33.894725 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:03:33.894730 | orchestrator | 2025-01-16 15:03:33.894735 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-01-16 15:03:33.894740 | orchestrator | Thursday 16 January 2025 15:02:28 +0000 (0:00:00.438) 0:00:41.711 ****** 2025-01-16 15:03:33.894745 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:03:33.894753 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:03:33.894758 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:03:33.894765 | orchestrator | 2025-01-16 15:03:33.894770 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-01-16 15:03:33.894775 | orchestrator | Thursday 16 January 2025 15:02:29 +0000 (0:00:00.230) 0:00:41.942 ****** 2025-01-16 15:03:33.894780 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:03:33.894785 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:03:33.894790 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:03:33.894795 | orchestrator | 2025-01-16 15:03:33.894800 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 
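The ovn-controller play above configures each chassis purely through Open_vSwitch external-ids (ovn-bridge-mappings, ovn-chassis-mac-mappings, ovn-cms-options), with state present/absent deciding whether a key is set or removed on a given node. A rough hand-run sketch of the same operations - the values are copied from the log above, and the openvswitch_vswitchd container name is an assumption about how OVS is packaged in this testbed:

  # state=present: map provider network physnet1 to the br-ex bridge
  docker exec openvswitch_vswitchd ovs-vsctl set open_vswitch . external_ids:ovn-bridge-mappings=physnet1:br-ex
  # state=present on testbed-node-0..2: advertise the chassis as a gateway in AZ nova
  docker exec openvswitch_vswitchd ovs-vsctl set open_vswitch . external_ids:ovn-cms-options='"enable-chassis-as-gw,availability-zones=nova"'
  # state=absent: drop the key again (here on testbed-node-3..5)
  docker exec openvswitch_vswitchd ovs-vsctl remove open_vswitch . external_ids ovn-cms-options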
2025-01-16 15:03:33.894805 | orchestrator | Thursday 16 January 2025 15:02:29 +0000 (0:00:00.349) 0:00:42.291 ****** 2025-01-16 15:03:33.894810 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:03:33.894815 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:03:33.894820 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:03:33.894824 | orchestrator | 2025-01-16 15:03:33.894829 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-01-16 15:03:33.894834 | orchestrator | Thursday 16 January 2025 15:02:29 +0000 (0:00:00.419) 0:00:42.710 ****** 2025-01-16 15:03:33.894839 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:03:33.894845 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:03:33.894850 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:03:33.894854 | orchestrator | 2025-01-16 15:03:33.894859 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-01-16 15:03:33.894864 | orchestrator | Thursday 16 January 2025 15:02:30 +0000 (0:00:00.202) 0:00:42.912 ****** 2025-01-16 15:03:33.894869 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:03:33.894875 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:03:33.894880 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:03:33.894884 | orchestrator | 2025-01-16 15:03:33.894889 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-01-16 15:03:33.894894 | orchestrator | Thursday 16 January 2025 15:02:30 +0000 (0:00:00.451) 0:00:43.364 ****** 2025-01-16 15:03:33.894899 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:03:33.894904 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:03:33.894909 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:03:33.894914 | orchestrator | 2025-01-16 15:03:33.894919 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-01-16 15:03:33.894929 | orchestrator | Thursday 16 January 2025 15:02:30 +0000 (0:00:00.292) 0:00:43.657 ****** 2025-01-16 15:03:33.894934 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:03:33.894939 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:03:33.894944 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:03:33.894950 | orchestrator | 2025-01-16 15:03:33.894955 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-01-16 15:03:33.894959 | orchestrator | Thursday 16 January 2025 15:02:31 +0000 (0:00:00.410) 0:00:44.067 ****** 2025-01-16 15:03:33.894965 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:03:33.894970 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:03:33.894975 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:03:33.894982 | orchestrator | 2025-01-16 15:03:33.894988 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-01-16 15:03:33.894992 | orchestrator | Thursday 16 January 2025 15:02:31 +0000 (0:00:00.472) 0:00:44.540 ****** 2025-01-16 15:03:33.894997 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:03:33.895003 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:03:33.895007 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:03:33.895012 | orchestrator | 2025-01-16 15:03:33.895017 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-01-16 15:03:33.895022 | orchestrator | Thursday 16 
January 2025 15:02:32 +0000 (0:00:00.285) 0:00:44.825 ****** 2025-01-16 15:03:33.895027 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:03:33.895032 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:03:33.895037 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:03:33.895042 | orchestrator | 2025-01-16 15:03:33.895047 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-01-16 15:03:33.895052 | orchestrator | Thursday 16 January 2025 15:02:32 +0000 (0:00:00.356) 0:00:45.182 ****** 2025-01-16 15:03:33.895056 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:03:33.895061 | orchestrator | 2025-01-16 15:03:33.895066 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-01-16 15:03:33.895071 | orchestrator | Thursday 16 January 2025 15:02:33 +0000 (0:00:01.019) 0:00:46.201 ****** 2025-01-16 15:03:33.895077 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:03:33.895082 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:03:33.895086 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:03:33.895091 | orchestrator | 2025-01-16 15:03:33.895096 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-01-16 15:03:33.895101 | orchestrator | Thursday 16 January 2025 15:02:34 +0000 (0:00:00.860) 0:00:47.061 ****** 2025-01-16 15:03:33.895106 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:03:33.895111 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:03:33.895116 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:03:33.895120 | orchestrator | 2025-01-16 15:03:33.895125 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-01-16 15:03:33.895130 | orchestrator | Thursday 16 January 2025 15:02:34 +0000 (0:00:00.652) 0:00:47.714 ****** 2025-01-16 15:03:33.895135 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:03:33.895140 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:03:33.895145 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:03:33.895150 | orchestrator | 2025-01-16 15:03:33.895154 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-01-16 15:03:33.895159 | orchestrator | Thursday 16 January 2025 15:02:35 +0000 (0:00:00.697) 0:00:48.411 ****** 2025-01-16 15:03:33.895164 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:03:33.895169 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:03:33.895174 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:03:33.895179 | orchestrator | 2025-01-16 15:03:33.895183 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-01-16 15:03:33.895189 | orchestrator | Thursday 16 January 2025 15:02:36 +0000 (0:00:00.506) 0:00:48.918 ****** 2025-01-16 15:03:33.895197 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:03:33.895202 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:03:33.895207 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:03:33.895212 | orchestrator | 2025-01-16 15:03:33.895217 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-01-16 15:03:33.895222 | orchestrator | Thursday 16 January 2025 15:02:36 +0000 (0:00:00.647) 0:00:49.565 ****** 2025-01-16 15:03:33.895226 | orchestrator | skipping: [testbed-node-0] 
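The lookup_cluster.yml / bootstrap-initial.yml logic above first establishes whether an NB/SB Raft cluster already exists and which member leads it, and only then chooses the bootstrap arguments. When verifying this by hand, the usual OVN commands look roughly like the following sketch - the container names match the ones deployed below, while the control-socket paths are the common OVN defaults and may differ in the kolla images:

  # Raft status of the OVN_Northbound database: leader, term and cluster members
  docker exec ovn_nb_db ovs-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound
  # the same for the OVN_Southbound database
  docker exec ovn_sb_db ovs-appctl -t /var/run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound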
2025-01-16 15:03:33.895231 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:03:33.895236 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:03:33.895241 | orchestrator | 2025-01-16 15:03:33.895246 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-01-16 15:03:33.895254 | orchestrator | Thursday 16 January 2025 15:02:37 +0000 (0:00:00.390) 0:00:49.956 ****** 2025-01-16 15:03:33.895259 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:03:33.895264 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:03:33.895269 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:03:33.895274 | orchestrator | 2025-01-16 15:03:33.895279 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-01-16 15:03:33.895284 | orchestrator | Thursday 16 January 2025 15:02:37 +0000 (0:00:00.340) 0:00:50.296 ****** 2025-01-16 15:03:33.895288 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:03:33.895293 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:03:33.895298 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:03:33.895306 | orchestrator | 2025-01-16 15:03:33.895310 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-01-16 15:03:33.895315 | orchestrator | Thursday 16 January 2025 15:02:37 +0000 (0:00:00.394) 0:00:50.690 ****** 2025-01-16 15:03:33.895322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.895329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.895335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.895341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.895346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 
15:03:33.895351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.895358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.895366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.895375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.895379 | orchestrator | 2025-01-16 15:03:33.895385 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-01-16 15:03:33.895390 | orchestrator | Thursday 16 January 2025 15:02:39 +0000 (0:00:01.225) 0:00:51.915 ****** 2025-01-16 15:03:33.895395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.895400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.895405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.895410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server:2024.1', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.895415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.895420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.895426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.895431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.895446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.895451 | orchestrator | 2025-01-16 15:03:33.895456 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-01-16 15:03:33.895461 | orchestrator | Thursday 16 January 2025 15:02:42 +0000 (0:00:03.156) 0:00:55.072 ****** 2025-01-16 15:03:33.895466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.895473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.895478 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.895486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.895491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.895496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.895502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.895509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.895515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.895519 | orchestrator | 2025-01-16 15:03:33.895527 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-01-16 15:03:33.895534 | orchestrator | Thursday 16 January 2025 15:02:44 +0000 (0:00:01.769) 0:00:56.842 ****** 2025-01-16 15:03:33.895539 | orchestrator | 2025-01-16 15:03:33.895544 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-01-16 15:03:33.895549 
| orchestrator | Thursday 16 January 2025 15:02:44 +0000 (0:00:00.121) 0:00:56.963 ****** 2025-01-16 15:03:33.895554 | orchestrator | 2025-01-16 15:03:33.895559 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-01-16 15:03:33.895564 | orchestrator | Thursday 16 January 2025 15:02:44 +0000 (0:00:00.039) 0:00:57.002 ****** 2025-01-16 15:03:33.895569 | orchestrator | 2025-01-16 15:03:33.895574 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-01-16 15:03:33.895579 | orchestrator | Thursday 16 January 2025 15:02:44 +0000 (0:00:00.041) 0:00:57.044 ****** 2025-01-16 15:03:33.895584 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:03:33.895589 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:03:33.895594 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:03:33.895599 | orchestrator | 2025-01-16 15:03:33.895604 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-01-16 15:03:33.895609 | orchestrator | Thursday 16 January 2025 15:02:46 +0000 (0:00:01.862) 0:00:58.906 ****** 2025-01-16 15:03:33.895614 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:03:33.895619 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:03:33.895624 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:03:33.895629 | orchestrator | 2025-01-16 15:03:33.895634 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-01-16 15:03:33.895639 | orchestrator | Thursday 16 January 2025 15:02:52 +0000 (0:00:06.604) 0:01:05.511 ****** 2025-01-16 15:03:33.895644 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:03:33.895649 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:03:33.895654 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:03:33.895697 | orchestrator | 2025-01-16 15:03:33.895703 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-01-16 15:03:33.895708 | orchestrator | Thursday 16 January 2025 15:02:59 +0000 (0:00:06.435) 0:01:11.947 ****** 2025-01-16 15:03:33.895712 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:03:33.895718 | orchestrator | 2025-01-16 15:03:33.895743 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-01-16 15:03:33.895749 | orchestrator | Thursday 16 January 2025 15:02:59 +0000 (0:00:00.152) 0:01:12.099 ****** 2025-01-16 15:03:33.895754 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:03:33.895759 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:03:33.895764 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:03:33.895769 | orchestrator | 2025-01-16 15:03:33.895774 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-01-16 15:03:33.895779 | orchestrator | Thursday 16 January 2025 15:02:59 +0000 (0:00:00.528) 0:01:12.628 ****** 2025-01-16 15:03:33.895784 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:03:33.895789 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:03:33.895797 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:03:33.895802 | orchestrator | 2025-01-16 15:03:33.895807 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-01-16 15:03:33.895812 | orchestrator | Thursday 16 January 2025 15:03:00 +0000 (0:00:00.448) 0:01:13.077 ****** 2025-01-16 15:03:33.895818 | orchestrator | ok: [testbed-node-0] 
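'Configure OVN NB connection settings' (and the SB variant below) reports changed only on testbed-node-0: the preceding 'Get ... cluster leader' tasks determine the current Raft leader and the connection settings are applied only there, so the other members skip it. Effectively this sets the ovsdb-server listeners that ovn-northd, ovn-controller and neutron later connect to. A hedged hand-run equivalent using the default OVN ports 6641/6642; the 0.0.0.0 bind address is a placeholder (the role presumably binds to the node's API address), and the availability of ovn-nbctl/ovn-sbctl inside the DB containers is an assumption:

  # let the northbound DB listen on TCP 6641 (run once, against the leader)
  docker exec ovn_nb_db ovn-nbctl set-connection ptcp:6641:0.0.0.0
  # southbound DB on TCP 6642
  docker exec ovn_sb_db ovn-sbctl set-connection ptcp:6642:0.0.0.0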
2025-01-16 15:03:33.895823 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:03:33.895827 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:03:33.895832 | orchestrator | 2025-01-16 15:03:33.895837 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-01-16 15:03:33.895842 | orchestrator | Thursday 16 January 2025 15:03:01 +0000 (0:00:00.690) 0:01:13.767 ****** 2025-01-16 15:03:33.895847 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:03:33.895852 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:03:33.895857 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:03:33.895862 | orchestrator | 2025-01-16 15:03:33.895866 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-01-16 15:03:33.895872 | orchestrator | Thursday 16 January 2025 15:03:01 +0000 (0:00:00.397) 0:01:14.165 ****** 2025-01-16 15:03:33.895877 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:03:33.895881 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:03:33.895887 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:03:33.895892 | orchestrator | 2025-01-16 15:03:33.895897 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-01-16 15:03:33.895901 | orchestrator | Thursday 16 January 2025 15:03:02 +0000 (0:00:00.638) 0:01:14.803 ****** 2025-01-16 15:03:33.895906 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:03:33.895911 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:03:33.895916 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:03:33.895921 | orchestrator | 2025-01-16 15:03:33.895926 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-01-16 15:03:33.895931 | orchestrator | Thursday 16 January 2025 15:03:03 +0000 (0:00:01.012) 0:01:15.816 ****** 2025-01-16 15:03:33.895936 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:03:33.895941 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:03:33.895946 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:03:33.895951 | orchestrator | 2025-01-16 15:03:33.895956 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-01-16 15:03:33.895961 | orchestrator | Thursday 16 January 2025 15:03:03 +0000 (0:00:00.484) 0:01:16.301 ****** 2025-01-16 15:03:33.895973 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.895981 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.895986 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2025-01-16 15:03:33.895992 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.896005 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.896011 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.896016 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.896020 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.896025 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.896030 | orchestrator | 2025-01-16 15:03:33.896035 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-01-16 15:03:33.896040 | orchestrator | Thursday 16 January 2025 15:03:04 +0000 (0:00:01.035) 0:01:17.336 ****** 2025-01-16 15:03:33.896046 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.896051 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.896059 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.896064 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.896073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.896078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.896083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.896088 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.896096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.896101 | orchestrator | 2025-01-16 15:03:33.896106 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-01-16 15:03:33.896113 | orchestrator | Thursday 16 January 2025 15:03:07 +0000 
(0:00:02.739) 0:01:20.076 ****** 2025-01-16 15:03:33.896118 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.896123 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.896129 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.896137 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.896148 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.896155 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.896160 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.896166 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.896171 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 
'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:03:33.896176 | orchestrator | 2025-01-16 15:03:33.896181 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-01-16 15:03:33.896186 | orchestrator | Thursday 16 January 2025 15:03:09 +0000 (0:00:02.529) 0:01:22.606 ****** 2025-01-16 15:03:33.896191 | orchestrator | 2025-01-16 15:03:33.896196 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-01-16 15:03:33.896202 | orchestrator | Thursday 16 January 2025 15:03:09 +0000 (0:00:00.042) 0:01:22.648 ****** 2025-01-16 15:03:33.896207 | orchestrator | 2025-01-16 15:03:33.896211 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-01-16 15:03:33.896216 | orchestrator | Thursday 16 January 2025 15:03:09 +0000 (0:00:00.041) 0:01:22.689 ****** 2025-01-16 15:03:33.896221 | orchestrator | 2025-01-16 15:03:33.896226 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-01-16 15:03:33.896231 | orchestrator | Thursday 16 January 2025 15:03:10 +0000 (0:00:00.132) 0:01:22.822 ****** 2025-01-16 15:03:33.896236 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:03:33.896241 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:03:33.896246 | orchestrator | 2025-01-16 15:03:33.896251 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-01-16 15:03:33.896256 | orchestrator | Thursday 16 January 2025 15:03:15 +0000 (0:00:05.921) 0:01:28.744 ****** 2025-01-16 15:03:33.896261 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:03:33.896266 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:03:33.896271 | orchestrator | 2025-01-16 15:03:33.896276 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-01-16 15:03:33.896281 | orchestrator | Thursday 16 January 2025 15:03:22 +0000 (0:00:06.293) 0:01:35.037 ****** 2025-01-16 15:03:33.896286 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:03:33.896291 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:03:33.896300 | orchestrator | 2025-01-16 15:03:33.896305 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-01-16 15:03:33.896310 | orchestrator | Thursday 16 January 2025 15:03:28 +0000 (0:00:06.252) 0:01:41.289 ****** 2025-01-16 15:03:33.896315 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:03:33.896320 | orchestrator | 2025-01-16 15:03:33.896325 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-01-16 15:03:33.896330 | orchestrator | Thursday 16 January 2025 15:03:28 +0000 (0:00:00.085) 0:01:41.375 ****** 2025-01-16 15:03:33.896335 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:03:33.896340 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:03:33.896345 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:03:33.896350 | orchestrator | 2025-01-16 15:03:33.896355 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-01-16 15:03:33.896360 | orchestrator | Thursday 16 
January 2025 15:03:29 +0000 (0:00:00.687) 0:01:42.062 ****** 2025-01-16 15:03:33.896368 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:03:33.896373 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:03:33.896378 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:03:33.896383 | orchestrator | 2025-01-16 15:03:33.896388 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-01-16 15:03:33.896393 | orchestrator | Thursday 16 January 2025 15:03:29 +0000 (0:00:00.422) 0:01:42.484 ****** 2025-01-16 15:03:33.896398 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:03:33.896403 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:03:33.896408 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:03:33.896413 | orchestrator | 2025-01-16 15:03:33.896418 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-01-16 15:03:33.896423 | orchestrator | Thursday 16 January 2025 15:03:30 +0000 (0:00:00.545) 0:01:43.029 ****** 2025-01-16 15:03:33.896428 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:03:33.896433 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:03:33.896438 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:03:33.896443 | orchestrator | 2025-01-16 15:03:33.896448 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-01-16 15:03:33.896453 | orchestrator | Thursday 16 January 2025 15:03:30 +0000 (0:00:00.402) 0:01:43.432 ****** 2025-01-16 15:03:33.896458 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:03:33.896463 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:03:33.896473 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:03:33.896478 | orchestrator | 2025-01-16 15:03:33.896483 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-01-16 15:03:33.896489 | orchestrator | Thursday 16 January 2025 15:03:31 +0000 (0:00:00.598) 0:01:44.030 ****** 2025-01-16 15:03:33.896494 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:03:33.896499 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:03:33.896504 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:03:33.896508 | orchestrator | 2025-01-16 15:03:33.896514 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 15:03:33.896519 | orchestrator | testbed-node-0 : ok=41  changed=16  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-01-16 15:03:33.896524 | orchestrator | testbed-node-1 : ok=40  changed=16  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-01-16 15:03:33.896529 | orchestrator | testbed-node-2 : ok=40  changed=16  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-01-16 15:03:33.896534 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 15:03:33.896541 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 15:03:33.896546 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 15:03:33.896555 | orchestrator | 2025-01-16 15:03:33.896560 | orchestrator | 2025-01-16 15:03:33.896565 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 15:03:33.896570 | orchestrator | Thursday 16 January 2025 15:03:32 +0000 (0:00:00.827) 0:01:44.857 ****** 2025-01-16 15:03:33.896575 | orchestrator | 
=============================================================================== 2025-01-16 15:03:33.896582 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 13.25s 2025-01-16 15:03:33.896587 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 12.90s 2025-01-16 15:03:33.896592 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 12.78s 2025-01-16 15:03:33.896597 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 12.69s 2025-01-16 15:03:33.896602 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 7.78s 2025-01-16 15:03:33.896607 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.16s 2025-01-16 15:03:33.896612 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 2.74s 2025-01-16 15:03:33.896618 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.53s 2025-01-16 15:03:33.896622 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 1.92s 2025-01-16 15:03:33.896628 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 1.77s 2025-01-16 15:03:33.896633 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.39s 2025-01-16 15:03:33.896638 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.23s 2025-01-16 15:03:33.896642 | orchestrator | ovn-db : Divide hosts by their OVN NB volume availability --------------- 1.05s 2025-01-16 15:03:33.896647 | orchestrator | ovn-db : Checking for any existing OVN DB container volumes ------------- 1.05s 2025-01-16 15:03:33.896652 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.04s 2025-01-16 15:03:33.896670 | orchestrator | ovn-db : include_tasks -------------------------------------------------- 1.02s 2025-01-16 15:03:33.896676 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.02s 2025-01-16 15:03:33.896681 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.01s 2025-01-16 15:03:33.896686 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.00s 2025-01-16 15:03:33.896691 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 0.97s 2025-01-16 15:03:33.896698 | orchestrator | 2025-01-16 15:03:33 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:03:36.920174 | orchestrator | 2025-01-16 15:03:33 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:03:36.920323 | orchestrator | 2025-01-16 15:03:36 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:03:39.946862 | orchestrator | 2025-01-16 15:03:36 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:03:39.946991 | orchestrator | 2025-01-16 15:03:36 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:03:39.947012 | orchestrator | 2025-01-16 15:03:36 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:03:39.947045 | orchestrator | 2025-01-16 15:03:39 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:03:39.948560 | orchestrator | 2025-01-16 15:03:39 | INFO  | Task 
6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED
[... tasks 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6, 6354ccb1-7afa-4063-99eb-fd395bbe0c66 and 31f2197f-46b6-4593-b320-01070af0c657 are polled every few seconds and remain in state STARTED; the identical poll/wait log lines between 15:03:39 and 15:04:43 are omitted ...]
2025-01-16 15:04:43.530894 | orchestrator | 2025-01-16 15:04:40 | 
INFO  | Wait 1 second(s) until the next check 2025-01-16 15:04:43.531040 | orchestrator | 2025-01-16 15:04:43 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:04:43.532096 | orchestrator | 2025-01-16 15:04:43 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:04:43.532841 | orchestrator | 2025-01-16 15:04:43 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:04:46.566998 | orchestrator | 2025-01-16 15:04:43 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:04:46.567140 | orchestrator | 2025-01-16 15:04:46 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:04:46.568272 | orchestrator | 2025-01-16 15:04:46 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:04:46.568326 | orchestrator | 2025-01-16 15:04:46 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:04:49.591572 | orchestrator | 2025-01-16 15:04:46 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:04:49.591728 | orchestrator | 2025-01-16 15:04:49 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:04:49.592856 | orchestrator | 2025-01-16 15:04:49 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:04:49.592880 | orchestrator | 2025-01-16 15:04:49 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:04:49.592894 | orchestrator | 2025-01-16 15:04:49 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:04:52.630358 | orchestrator | 2025-01-16 15:04:52 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:04:55.656904 | orchestrator | 2025-01-16 15:04:52 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state STARTED 2025-01-16 15:04:55.657029 | orchestrator | 2025-01-16 15:04:52 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:04:55.657150 | orchestrator | 2025-01-16 15:04:52 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:04:55.657173 | orchestrator | 2025-01-16 15:04:55 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:04:55.658528 | orchestrator | 2025-01-16 15:04:55 | INFO  | Task 6354ccb1-7afa-4063-99eb-fd395bbe0c66 is in state SUCCESS 2025-01-16 15:04:55.658579 | orchestrator | 2025-01-16 15:04:55.658588 | orchestrator | 2025-01-16 15:04:55.658596 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-01-16 15:04:55.658604 | orchestrator | 2025-01-16 15:04:55.658613 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-01-16 15:04:55.658664 | orchestrator | Thursday 16 January 2025 14:59:58 +0000 (0:00:03.016) 0:00:03.016 ****** 2025-01-16 15:04:55.658672 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:04:55.658682 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:04:55.658690 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:04:55.658698 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:04:55.658707 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:04:55.658715 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:04:55.658722 | orchestrator | 2025-01-16 15:04:55.658730 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-01-16 15:04:55.658739 | orchestrator | Thursday 16 January 2025 15:00:08 +0000 
(0:00:10.312) 0:00:13.328 ****** 2025-01-16 15:04:55.658747 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:04:55.658756 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:04:55.658764 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:04:55.658772 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:04:55.658780 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:04:55.658787 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:04:55.658795 | orchestrator | 2025-01-16 15:04:55.658803 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-01-16 15:04:55.658811 | orchestrator | Thursday 16 January 2025 15:00:11 +0000 (0:00:03.019) 0:00:16.347 ****** 2025-01-16 15:04:55.658818 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:04:55.658826 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:04:55.658834 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:04:55.658841 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:04:55.658849 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:04:55.658856 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:04:55.658864 | orchestrator | 2025-01-16 15:04:55.658871 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-01-16 15:04:55.658879 | orchestrator | Thursday 16 January 2025 15:00:13 +0000 (0:00:02.151) 0:00:18.499 ****** 2025-01-16 15:04:55.658907 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:04:55.658915 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:04:55.658922 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:04:55.658930 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:04:55.658937 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:04:55.658945 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:04:55.658952 | orchestrator | 2025-01-16 15:04:55.658960 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-01-16 15:04:55.658968 | orchestrator | Thursday 16 January 2025 15:00:15 +0000 (0:00:02.036) 0:00:20.536 ****** 2025-01-16 15:04:55.658976 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:04:55.658983 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:04:55.658991 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:04:55.658998 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:04:55.659006 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:04:55.659013 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:04:55.659021 | orchestrator | 2025-01-16 15:04:55.659029 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-01-16 15:04:55.659043 | orchestrator | Thursday 16 January 2025 15:00:19 +0000 (0:00:03.947) 0:00:24.483 ****** 2025-01-16 15:04:55.659051 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:04:55.659058 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:04:55.659066 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:04:55.659073 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:04:55.659081 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:04:55.659089 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:04:55.659097 | orchestrator | 2025-01-16 15:04:55.659105 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-01-16 15:04:55.659113 | orchestrator | Thursday 16 January 2025 15:00:23 +0000 
(0:00:04.193) 0:00:28.676 ****** 2025-01-16 15:04:55.659121 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:04:55.659129 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:04:55.659138 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:04:55.659146 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:04:55.659154 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:04:55.659162 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:04:55.659170 | orchestrator | 2025-01-16 15:04:55.659178 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-01-16 15:04:55.659186 | orchestrator | Thursday 16 January 2025 15:00:26 +0000 (0:00:02.356) 0:00:31.033 ****** 2025-01-16 15:04:55.659194 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:04:55.659203 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:04:55.659211 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:04:55.659219 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:04:55.659230 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:04:55.659239 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:04:55.659247 | orchestrator | 2025-01-16 15:04:55.659255 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-01-16 15:04:55.659263 | orchestrator | Thursday 16 January 2025 15:00:28 +0000 (0:00:02.429) 0:00:33.462 ****** 2025-01-16 15:04:55.659272 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-01-16 15:04:55.659280 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-01-16 15:04:55.659288 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:04:55.659296 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-01-16 15:04:55.659305 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-01-16 15:04:55.659313 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:04:55.659321 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-01-16 15:04:55.659329 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-01-16 15:04:55.659337 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:04:55.659350 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-01-16 15:04:55.659368 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-01-16 15:04:55.659377 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:04:55.659385 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-01-16 15:04:55.659394 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-01-16 15:04:55.659401 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:04:55.659409 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-01-16 15:04:55.659417 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-01-16 15:04:55.659425 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:04:55.659432 | orchestrator | 2025-01-16 15:04:55.659441 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-01-16 15:04:55.659448 | orchestrator | Thursday 16 January 2025 15:00:32 +0000 
(0:00:03.874) 0:00:37.336 ****** 2025-01-16 15:04:55.659457 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:04:55.659464 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:04:55.659472 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:04:55.659480 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:04:55.659488 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:04:55.659496 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:04:55.659503 | orchestrator | 2025-01-16 15:04:55.659512 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-01-16 15:04:55.659521 | orchestrator | Thursday 16 January 2025 15:00:35 +0000 (0:00:03.614) 0:00:40.951 ****** 2025-01-16 15:04:55.659529 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:04:55.659536 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:04:55.659544 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:04:55.659552 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:04:55.659560 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:04:55.659567 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:04:55.659575 | orchestrator | 2025-01-16 15:04:55.659583 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-01-16 15:04:55.659591 | orchestrator | Thursday 16 January 2025 15:00:37 +0000 (0:00:01.906) 0:00:42.857 ****** 2025-01-16 15:04:55.659598 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:04:55.659606 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:04:55.659614 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:04:55.659641 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:04:55.659650 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:04:55.659657 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:04:55.659665 | orchestrator | 2025-01-16 15:04:55.659673 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-01-16 15:04:55.659681 | orchestrator | Thursday 16 January 2025 15:00:43 +0000 (0:00:05.562) 0:00:48.419 ****** 2025-01-16 15:04:55.659689 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:04:55.659697 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:04:55.659704 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:04:55.659712 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:04:55.659720 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:04:55.659728 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:04:55.659735 | orchestrator | 2025-01-16 15:04:55.659743 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-01-16 15:04:55.659751 | orchestrator | Thursday 16 January 2025 15:00:45 +0000 (0:00:02.075) 0:00:50.495 ****** 2025-01-16 15:04:55.659758 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:04:55.659766 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:04:55.659774 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:04:55.659782 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:04:55.659789 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:04:55.659802 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:04:55.659810 | orchestrator | 2025-01-16 15:04:55.659817 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-01-16 15:04:55.659826 | 
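The k3s_download tasks above fetch the k3s binary that matches each node's CPU architecture; only the x64 download ran here because all six testbed nodes are amd64, so the arm64 and armhf variants were skipped. A minimal sketch of what such a download task can look like, assuming the upstream GitHub release URL pattern and a k3s_version variable (both illustrative, not taken from the role's actual source):

  # Sketch only: fetch the amd64 k3s binary; URL pattern and variable name are assumptions.
  - name: Download k3s binary x64
    ansible.builtin.get_url:
      url: "https://github.com/k3s-io/k3s/releases/download/{{ k3s_version }}/k3s"
      dest: /usr/local/bin/k3s
      owner: root
      group: root
      mode: "0755"
    when: ansible_facts.architecture == "x86_64"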
orchestrator | Thursday 16 January 2025 15:00:47 +0000 (0:00:02.254) 0:00:52.749 ****** 2025-01-16 15:04:55.659834 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:04:55.659841 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:04:55.659848 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:04:55.659856 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:04:55.659863 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:04:55.659871 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:04:55.659878 | orchestrator | 2025-01-16 15:04:55.659886 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-01-16 15:04:55.659893 | orchestrator | Thursday 16 January 2025 15:00:49 +0000 (0:00:02.045) 0:00:54.794 ****** 2025-01-16 15:04:55.659901 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2025-01-16 15:04:55.659908 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2025-01-16 15:04:55.659916 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:04:55.659923 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2025-01-16 15:04:55.659942 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2025-01-16 15:04:55.659951 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:04:55.659959 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2025-01-16 15:04:55.659975 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2025-01-16 15:04:55.659983 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:04:55.659990 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2025-01-16 15:04:55.659997 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2025-01-16 15:04:55.660004 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:04:55.660018 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2025-01-16 15:04:55.660025 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2025-01-16 15:04:55.660033 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:04:55.660041 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2025-01-16 15:04:55.660048 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2025-01-16 15:04:55.660056 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:04:55.660063 | orchestrator | 2025-01-16 15:04:55.660070 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-01-16 15:04:55.660088 | orchestrator | Thursday 16 January 2025 15:00:52 +0000 (0:00:02.364) 0:00:57.159 ****** 2025-01-16 15:04:55.660097 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:04:55.660108 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:04:55.660116 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:04:55.660124 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:04:55.660132 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:04:55.660139 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:04:55.660147 | orchestrator | 2025-01-16 15:04:55.660154 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-01-16 15:04:55.660163 | orchestrator | 2025-01-16 15:04:55.660171 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-01-16 15:04:55.660180 | orchestrator | Thursday 16 January 2025 15:00:54 +0000 (0:00:02.627) 0:00:59.786 ****** 2025-01-16 15:04:55.660187 | 
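All of the k3s_custom_registries tasks were skipped in this run; when enabled, they create /etc/rancher/k3s and write a registries.yaml that points the embedded containerd at a private mirror. A hedged example of what such a file can look like, with a purely illustrative mirror endpoint:

  # Sketch of /etc/rancher/k3s/registries.yaml; the endpoint below is an example value.
  mirrors:
    docker.io:
      endpoint:
        - "https://registry.example.com:5000"
  configs:
    "registry.example.com:5000":
      tls:
        insecure_skip_verify: false

k3s reads this file at startup, so no separate containerd configuration is needed on the nodes.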
orchestrator | ok: [testbed-node-0] 2025-01-16 15:04:55.660195 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:04:55.660203 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:04:55.660212 | orchestrator | 2025-01-16 15:04:55.660220 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-01-16 15:04:55.660228 | orchestrator | Thursday 16 January 2025 15:00:56 +0000 (0:00:01.807) 0:01:01.594 ****** 2025-01-16 15:04:55.660236 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:04:55.660244 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:04:55.660259 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:04:55.660268 | orchestrator | 2025-01-16 15:04:55.660275 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-01-16 15:04:55.660283 | orchestrator | Thursday 16 January 2025 15:00:58 +0000 (0:00:01.764) 0:01:03.359 ****** 2025-01-16 15:04:55.660291 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:04:55.660299 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:04:55.660307 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:04:55.660315 | orchestrator | 2025-01-16 15:04:55.660324 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-01-16 15:04:55.660332 | orchestrator | Thursday 16 January 2025 15:01:00 +0000 (0:00:01.743) 0:01:05.103 ****** 2025-01-16 15:04:55.660340 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:04:55.660348 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:04:55.660356 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:04:55.660363 | orchestrator | 2025-01-16 15:04:55.660371 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-01-16 15:04:55.660380 | orchestrator | Thursday 16 January 2025 15:01:01 +0000 (0:00:01.409) 0:01:06.513 ****** 2025-01-16 15:04:55.660388 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:04:55.660396 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:04:55.660404 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:04:55.660412 | orchestrator | 2025-01-16 15:04:55.660420 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-01-16 15:04:55.660428 | orchestrator | Thursday 16 January 2025 15:01:02 +0000 (0:00:01.322) 0:01:07.835 ****** 2025-01-16 15:04:55.660436 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:04:55.660444 | orchestrator | 2025-01-16 15:04:55.660452 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-01-16 15:04:55.660461 | orchestrator | Thursday 16 January 2025 15:01:04 +0000 (0:00:01.683) 0:01:09.519 ****** 2025-01-16 15:04:55.660469 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:04:55.660477 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:04:55.660485 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:04:55.660493 | orchestrator | 2025-01-16 15:04:55.660501 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-01-16 15:04:55.660508 | orchestrator | Thursday 16 January 2025 15:01:06 +0000 (0:00:02.431) 0:01:11.951 ****** 2025-01-16 15:04:55.660515 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:04:55.660523 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:04:55.660530 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:04:55.660538 | 
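The following tasks stage kube-vip on the first master by dropping manifests into /var/lib/rancher/k3s/server/manifests/, the directory k3s auto-applies at startup and whose bootstrap files are cleaned up again near the end of this play. A rough sketch of that staging, where the RBAC URL, template name, and inventory group name are assumptions; the destination file names match the vip-rbac.yaml and vip.yaml entries seen in the later cleanup task:

  # Sketch: stage kube-vip manifests on the first control-plane node only.
  # The URL, template, and group name are illustrative assumptions.
  - name: Download vip rbac manifest to first master
    ansible.builtin.get_url:
      url: "https://kube-vip.io/manifests/rbac.yaml"
      dest: /var/lib/rancher/k3s/server/manifests/vip-rbac.yaml
      mode: "0644"
    when: inventory_hostname == groups['k3s_masters'][0]

  - name: Copy vip manifest to first master
    ansible.builtin.template:
      src: vip.yaml.j2
      dest: /var/lib/rancher/k3s/server/manifests/vip.yaml
      mode: "0644"
    when: inventory_hostname == groups['k3s_masters'][0]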
orchestrator | 2025-01-16 15:04:55.660545 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-01-16 15:04:55.660552 | orchestrator | Thursday 16 January 2025 15:01:08 +0000 (0:00:01.404) 0:01:13.355 ****** 2025-01-16 15:04:55.660560 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:04:55.660569 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:04:55.660578 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:04:55.660587 | orchestrator | 2025-01-16 15:04:55.660595 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-01-16 15:04:55.660604 | orchestrator | Thursday 16 January 2025 15:01:09 +0000 (0:00:01.466) 0:01:14.822 ****** 2025-01-16 15:04:55.660613 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:04:55.660637 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:04:55.660646 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:04:55.660655 | orchestrator | 2025-01-16 15:04:55.660663 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-01-16 15:04:55.660671 | orchestrator | Thursday 16 January 2025 15:01:12 +0000 (0:00:02.396) 0:01:17.218 ****** 2025-01-16 15:04:55.660679 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:04:55.660687 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:04:55.660696 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:04:55.660704 | orchestrator | 2025-01-16 15:04:55.660712 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-01-16 15:04:55.660726 | orchestrator | Thursday 16 January 2025 15:01:13 +0000 (0:00:01.320) 0:01:18.539 ****** 2025-01-16 15:04:55.660734 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:04:55.660743 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:04:55.660751 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:04:55.660759 | orchestrator | 2025-01-16 15:04:55.660767 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-01-16 15:04:55.660775 | orchestrator | Thursday 16 January 2025 15:01:15 +0000 (0:00:01.666) 0:01:20.206 ****** 2025-01-16 15:04:55.660783 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:04:55.660792 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:04:55.660800 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:04:55.660808 | orchestrator | 2025-01-16 15:04:55.660817 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-01-16 15:04:55.660825 | orchestrator | Thursday 16 January 2025 15:01:17 +0000 (0:00:02.476) 0:01:22.683 ****** 2025-01-16 15:04:55.660839 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-01-16 15:04:55.660849 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-01-16 15:04:55.660858 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-01-16 15:04:55.660866 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 
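The retries above are part of the bootstrap pattern this role uses: the first cluster start runs inside a throwaway k3s-init unit, and a polling task keeps querying the API server until every master has registered as a node or 20 attempts are exhausted. A condensed sketch of that pattern, with illustrative flags, token variable, and group name rather than the role's real ones:

  # Sketch: transient bootstrap unit plus join verification (values are illustrative).
  # Only the first master would use --cluster-init; the others would join with
  # --server https://<first-master>:6443 instead.
  - name: Init cluster inside the transient k3s-init service
    ansible.builtin.command:
      cmd: >-
        systemd-run -p RestartSec=2 -p Restart=on-failure --unit=k3s-init
        k3s server --cluster-init --token {{ k3s_token }}

  - name: Verify that all nodes actually joined
    ansible.builtin.command:
      cmd: k3s kubectl get nodes -o jsonpath='{.items[*].metadata.name}'
    register: joined_nodes
    until: joined_nodes.stdout.split() | length >= groups['k3s_masters'] | length
    retries: 20
    delay: 10
    changed_when: false

In this run the three masters needed a few retry rounds, roughly 43 seconds, before all of them reported in.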
2025-01-16 15:04:55.660874 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-01-16 15:04:55.660882 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-01-16 15:04:55.660889 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-01-16 15:04:55.660897 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-01-16 15:04:55.660904 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-01-16 15:04:55.660910 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-01-16 15:04:55.660917 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-01-16 15:04:55.660925 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-01-16 15:04:55.660932 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:04:55.660940 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:04:55.660948 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:04:55.660956 | orchestrator | 2025-01-16 15:04:55.660963 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-01-16 15:04:55.660971 | orchestrator | Thursday 16 January 2025 15:02:00 +0000 (0:00:43.049) 0:02:05.733 ****** 2025-01-16 15:04:55.660978 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:04:55.660987 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:04:55.660995 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:04:55.661002 | orchestrator | 2025-01-16 15:04:55.661010 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-01-16 15:04:55.661022 | orchestrator | Thursday 16 January 2025 15:02:02 +0000 (0:00:01.332) 0:02:07.065 ****** 2025-01-16 15:04:55.661029 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:04:55.661043 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:04:55.661051 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:04:55.661058 | orchestrator | 2025-01-16 15:04:55.661066 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-01-16 15:04:55.661073 | orchestrator | Thursday 16 January 2025 15:02:03 +0000 (0:00:01.844) 0:02:08.910 ****** 2025-01-16 15:04:55.661081 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:04:55.661094 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:04:55.661102 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:04:55.661109 | orchestrator | 2025-01-16 15:04:55.661116 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-01-16 15:04:55.661123 | orchestrator | Thursday 16 January 2025 15:02:05 +0000 (0:00:01.730) 0:02:10.641 ****** 2025-01-16 15:04:55.661130 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:04:55.661138 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:04:55.661145 
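Once the verification passes, the temporary unit is killed and replaced by the permanent k3s service, which is what the Copy K3s service file and Enable and check K3s service tasks above do; the roughly 14 seconds logged for the latter step is mostly the restart of the servers under the real unit. A plausible sketch with an assumed template name:

  # Sketch: install and start the permanent k3s server unit (template name is assumed).
  - name: Copy K3s service file
    ansible.builtin.template:
      src: k3s.service.j2
      dest: /etc/systemd/system/k3s.service
      owner: root
      group: root
      mode: "0644"

  - name: Enable and check K3s service
    ansible.builtin.systemd:
      name: k3s
      daemon_reload: true
      state: restarted
      enabled: true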
| orchestrator | changed: [testbed-node-2] 2025-01-16 15:04:55.661152 | orchestrator | 2025-01-16 15:04:55.661160 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-01-16 15:04:55.661167 | orchestrator | Thursday 16 January 2025 15:02:20 +0000 (0:00:14.542) 0:02:25.184 ****** 2025-01-16 15:04:55.661174 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:04:55.661181 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:04:55.661188 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:04:55.661195 | orchestrator | 2025-01-16 15:04:55.661202 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-01-16 15:04:55.661210 | orchestrator | Thursday 16 January 2025 15:02:21 +0000 (0:00:01.491) 0:02:26.675 ****** 2025-01-16 15:04:55.661217 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:04:55.661224 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:04:55.661231 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:04:55.661239 | orchestrator | 2025-01-16 15:04:55.661246 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-01-16 15:04:55.661253 | orchestrator | Thursday 16 January 2025 15:02:23 +0000 (0:00:01.507) 0:02:28.183 ****** 2025-01-16 15:04:55.661260 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:04:55.661267 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:04:55.661275 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:04:55.661282 | orchestrator | 2025-01-16 15:04:55.661290 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-01-16 15:04:55.661297 | orchestrator | Thursday 16 January 2025 15:02:24 +0000 (0:00:01.499) 0:02:29.682 ****** 2025-01-16 15:04:55.661305 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:04:55.661313 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:04:55.661320 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:04:55.661327 | orchestrator | 2025-01-16 15:04:55.661335 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-01-16 15:04:55.661342 | orchestrator | Thursday 16 January 2025 15:02:26 +0000 (0:00:01.828) 0:02:31.511 ****** 2025-01-16 15:04:55.661356 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:04:55.661364 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:04:55.661371 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:04:55.661378 | orchestrator | 2025-01-16 15:04:55.661386 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-01-16 15:04:55.661393 | orchestrator | Thursday 16 January 2025 15:02:27 +0000 (0:00:01.363) 0:02:32.874 ****** 2025-01-16 15:04:55.661401 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:04:55.661408 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:04:55.661416 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:04:55.661423 | orchestrator | 2025-01-16 15:04:55.661431 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-01-16 15:04:55.661439 | orchestrator | Thursday 16 January 2025 15:02:29 +0000 (0:00:01.474) 0:02:34.349 ****** 2025-01-16 15:04:55.661446 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:04:55.661453 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:04:55.661461 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:04:55.661474 | orchestrator | 2025-01-16 15:04:55.661481 | orchestrator | TASK 
[k3s_server : Copy config file to user home directory] ******************** 2025-01-16 15:04:55.661488 | orchestrator | Thursday 16 January 2025 15:02:30 +0000 (0:00:01.247) 0:02:35.596 ****** 2025-01-16 15:04:55.661496 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:04:55.661504 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:04:55.661511 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:04:55.661519 | orchestrator | 2025-01-16 15:04:55.661527 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-01-16 15:04:55.661534 | orchestrator | Thursday 16 January 2025 15:02:32 +0000 (0:00:01.594) 0:02:37.191 ****** 2025-01-16 15:04:55.661542 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:04:55.661549 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:04:55.661556 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:04:55.661563 | orchestrator | 2025-01-16 15:04:55.661571 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-01-16 15:04:55.661579 | orchestrator | Thursday 16 January 2025 15:02:33 +0000 (0:00:01.686) 0:02:38.878 ****** 2025-01-16 15:04:55.661587 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:04:55.661594 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:04:55.661601 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:04:55.661608 | orchestrator | 2025-01-16 15:04:55.661615 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-01-16 15:04:55.661667 | orchestrator | Thursday 16 January 2025 15:02:34 +0000 (0:00:01.089) 0:02:39.967 ****** 2025-01-16 15:04:55.661675 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:04:55.661682 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:04:55.661690 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:04:55.661697 | orchestrator | 2025-01-16 15:04:55.661705 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-01-16 15:04:55.661712 | orchestrator | Thursday 16 January 2025 15:02:36 +0000 (0:00:01.207) 0:02:41.175 ****** 2025-01-16 15:04:55.661720 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:04:55.661727 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:04:55.661734 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:04:55.661742 | orchestrator | 2025-01-16 15:04:55.661749 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2025-01-16 15:04:55.661757 | orchestrator | Thursday 16 January 2025 15:02:37 +0000 (0:00:01.594) 0:02:42.770 ****** 2025-01-16 15:04:55.661764 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:04:55.661771 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:04:55.661778 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:04:55.661793 | orchestrator | 2025-01-16 15:04:55.661801 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2025-01-16 15:04:55.661812 | orchestrator | Thursday 16 January 2025 15:02:39 +0000 (0:00:01.248) 0:02:44.018 ****** 2025-01-16 15:04:55.661819 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-01-16 15:04:55.661827 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-01-16 15:04:55.661834 | orchestrator | changed: [testbed-node-2] => 
(item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-01-16 15:04:55.661842 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-01-16 15:04:55.661850 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-01-16 15:04:55.661859 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-01-16 15:04:55.661866 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-01-16 15:04:55.661874 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-01-16 15:04:55.661882 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-01-16 15:04:55.661895 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2025-01-16 15:04:55.661903 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-01-16 15:04:55.661913 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-01-16 15:04:55.661920 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-01-16 15:04:55.661928 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-01-16 15:04:55.661935 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-01-16 15:04:55.661942 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-01-16 15:04:55.661956 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-01-16 15:04:55.661964 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-01-16 15:04:55.661971 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-01-16 15:04:55.661978 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-01-16 15:04:55.661986 | orchestrator | 2025-01-16 15:04:55.661994 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-01-16 15:04:55.662001 | orchestrator | 2025-01-16 15:04:55.662008 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-01-16 15:04:55.662064 | orchestrator | Thursday 16 January 2025 15:02:42 +0000 (0:00:03.138) 0:02:47.157 ****** 2025-01-16 15:04:55.662076 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:04:55.662084 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:04:55.662092 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:04:55.662101 | orchestrator | 2025-01-16 15:04:55.662109 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-01-16 15:04:55.662117 | orchestrator | Thursday 16 January 2025 15:02:43 +0000 (0:00:01.286) 0:02:48.443 ****** 2025-01-16 15:04:55.662125 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:04:55.662133 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:04:55.662141 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:04:55.662148 | orchestrator | 2025-01-16 15:04:55.662156 | orchestrator | TASK 
[k3s_agent : Set fact for PXE-booted system] ****************************** 2025-01-16 15:04:55.662164 | orchestrator | Thursday 16 January 2025 15:02:44 +0000 (0:00:01.254) 0:02:49.698 ****** 2025-01-16 15:04:55.662171 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:04:55.662179 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:04:55.662193 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:04:55.662202 | orchestrator | 2025-01-16 15:04:55.662209 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-01-16 15:04:55.662217 | orchestrator | Thursday 16 January 2025 15:02:45 +0000 (0:00:01.177) 0:02:50.875 ****** 2025-01-16 15:04:55.662225 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 15:04:55.662233 | orchestrator | 2025-01-16 15:04:55.662241 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-01-16 15:04:55.662249 | orchestrator | Thursday 16 January 2025 15:02:47 +0000 (0:00:01.442) 0:02:52.317 ****** 2025-01-16 15:04:55.662257 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:04:55.662265 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:04:55.662273 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:04:55.662280 | orchestrator | 2025-01-16 15:04:55.662288 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2025-01-16 15:04:55.662296 | orchestrator | Thursday 16 January 2025 15:02:48 +0000 (0:00:00.991) 0:02:53.308 ****** 2025-01-16 15:04:55.662303 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:04:55.662311 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:04:55.662326 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:04:55.662334 | orchestrator | 2025-01-16 15:04:55.662341 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-01-16 15:04:55.662349 | orchestrator | Thursday 16 January 2025 15:02:49 +0000 (0:00:00.945) 0:02:54.253 ****** 2025-01-16 15:04:55.662357 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:04:55.662365 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:04:55.662373 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:04:55.662381 | orchestrator | 2025-01-16 15:04:55.662389 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2025-01-16 15:04:55.662397 | orchestrator | Thursday 16 January 2025 15:02:50 +0000 (0:00:00.924) 0:02:55.178 ****** 2025-01-16 15:04:55.662404 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:04:55.662412 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:04:55.662420 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:04:55.662428 | orchestrator | 2025-01-16 15:04:55.662437 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-01-16 15:04:55.662445 | orchestrator | Thursday 16 January 2025 15:02:51 +0000 (0:00:01.690) 0:02:56.869 ****** 2025-01-16 15:04:55.662453 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:04:55.662461 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:04:55.662469 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:04:55.662477 | orchestrator | 2025-01-16 15:04:55.662484 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-01-16 15:04:55.662492 | orchestrator | 2025-01-16 15:04:55.662500 
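On the worker side the agent gets its own k3s-node unit (the same name as the k3s-node.service.d drop-in directory mentioned above) that points at the control plane and presents the node token read from the first master earlier in this log. A hedged sketch of those two steps, with an assumed template and illustrative variable names:

  # Sketch: configure and start the k3s agent unit; template and variables are assumptions.
  # The templated unit would typically execute something like:
  #   k3s agent --server https://{{ apiserver_endpoint }}:6443 --token {{ k3s_node_token }}
  - name: Configure the k3s service
    ansible.builtin.template:
      src: k3s-node.service.j2
      dest: /etc/systemd/system/k3s-node.service
      owner: root
      group: root
      mode: "0644"

  - name: Manage k3s service
    ansible.builtin.systemd:
      name: k3s-node
      daemon_reload: true
      state: restarted
      enabled: true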
| orchestrator | TASK [Get home directory of operator user] ************************************* 2025-01-16 15:04:55.662507 | orchestrator | Thursday 16 January 2025 15:02:58 +0000 (0:00:06.912) 0:03:03.781 ****** 2025-01-16 15:04:55.662514 | orchestrator | ok: [testbed-manager] 2025-01-16 15:04:55.662522 | orchestrator | 2025-01-16 15:04:55.662529 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-01-16 15:04:55.662536 | orchestrator | Thursday 16 January 2025 15:02:59 +0000 (0:00:01.193) 0:03:04.974 ****** 2025-01-16 15:04:55.662544 | orchestrator | changed: [testbed-manager] 2025-01-16 15:04:55.662551 | orchestrator | 2025-01-16 15:04:55.662559 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-01-16 15:04:55.662566 | orchestrator | Thursday 16 January 2025 15:03:00 +0000 (0:00:01.011) 0:03:05.986 ****** 2025-01-16 15:04:55.662574 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-01-16 15:04:55.662581 | orchestrator | 2025-01-16 15:04:55.662589 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-01-16 15:04:55.662599 | orchestrator | Thursday 16 January 2025 15:03:02 +0000 (0:00:01.233) 0:03:07.220 ****** 2025-01-16 15:04:55.662607 | orchestrator | changed: [testbed-manager] 2025-01-16 15:04:55.662615 | orchestrator | 2025-01-16 15:04:55.662666 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-01-16 15:04:55.662673 | orchestrator | Thursday 16 January 2025 15:03:03 +0000 (0:00:01.418) 0:03:08.638 ****** 2025-01-16 15:04:55.662681 | orchestrator | changed: [testbed-manager] 2025-01-16 15:04:55.662688 | orchestrator | 2025-01-16 15:04:55.662696 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-01-16 15:04:55.662712 | orchestrator | Thursday 16 January 2025 15:03:04 +0000 (0:00:01.077) 0:03:09.715 ****** 2025-01-16 15:04:55.662720 | orchestrator | changed: [testbed-manager -> localhost] 2025-01-16 15:04:55.662728 | orchestrator | 2025-01-16 15:04:55.662736 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-01-16 15:04:55.662743 | orchestrator | Thursday 16 January 2025 15:03:06 +0000 (0:00:01.578) 0:03:11.294 ****** 2025-01-16 15:04:55.662750 | orchestrator | changed: [testbed-manager -> localhost] 2025-01-16 15:04:55.662758 | orchestrator | 2025-01-16 15:04:55.662765 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-01-16 15:04:55.662773 | orchestrator | Thursday 16 January 2025 15:03:07 +0000 (0:00:01.019) 0:03:12.313 ****** 2025-01-16 15:04:55.662786 | orchestrator | changed: [testbed-manager] 2025-01-16 15:04:55.662794 | orchestrator | 2025-01-16 15:04:55.662801 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-01-16 15:04:55.662808 | orchestrator | Thursday 16 January 2025 15:03:08 +0000 (0:00:01.241) 0:03:13.555 ****** 2025-01-16 15:04:55.662816 | orchestrator | changed: [testbed-manager] 2025-01-16 15:04:55.662823 | orchestrator | 2025-01-16 15:04:55.662831 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-01-16 15:04:55.662838 | orchestrator | 2025-01-16 15:04:55.662845 | orchestrator | TASK [osism.commons.kubectl : Gather variables for each operating system] ****** 2025-01-16 
15:04:55.662852 | orchestrator | Thursday 16 January 2025 15:03:09 +0000 (0:00:01.326) 0:03:14.881 ****** 2025-01-16 15:04:55.662860 | orchestrator | ok: [testbed-manager] 2025-01-16 15:04:55.662868 | orchestrator | 2025-01-16 15:04:55.662875 | orchestrator | TASK [osism.commons.kubectl : Include distribution specific install tasks] ***** 2025-01-16 15:04:55.662882 | orchestrator | Thursday 16 January 2025 15:03:10 +0000 (0:00:00.829) 0:03:15.710 ****** 2025-01-16 15:04:55.662890 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-01-16 15:04:55.662898 | orchestrator | 2025-01-16 15:04:55.662906 | orchestrator | TASK [osism.commons.kubectl : Remove old architecture-dependent repository] **** 2025-01-16 15:04:55.662913 | orchestrator | Thursday 16 January 2025 15:03:11 +0000 (0:00:01.123) 0:03:16.834 ****** 2025-01-16 15:04:55.662921 | orchestrator | ok: [testbed-manager] 2025-01-16 15:04:55.662928 | orchestrator | 2025-01-16 15:04:55.662936 | orchestrator | TASK [osism.commons.kubectl : Install apt-transport-https package] ************* 2025-01-16 15:04:55.662943 | orchestrator | Thursday 16 January 2025 15:03:13 +0000 (0:00:01.180) 0:03:18.015 ****** 2025-01-16 15:04:55.662951 | orchestrator | ok: [testbed-manager] 2025-01-16 15:04:55.662958 | orchestrator | 2025-01-16 15:04:55.662965 | orchestrator | TASK [osism.commons.kubectl : Add repository gpg key] ************************** 2025-01-16 15:04:55.662973 | orchestrator | Thursday 16 January 2025 15:03:14 +0000 (0:00:01.632) 0:03:19.647 ****** 2025-01-16 15:04:55.662981 | orchestrator | changed: [testbed-manager] 2025-01-16 15:04:55.662988 | orchestrator | 2025-01-16 15:04:55.662996 | orchestrator | TASK [osism.commons.kubectl : Set permissions of gpg key] ********************** 2025-01-16 15:04:55.663003 | orchestrator | Thursday 16 January 2025 15:03:15 +0000 (0:00:01.322) 0:03:20.970 ****** 2025-01-16 15:04:55.663011 | orchestrator | ok: [testbed-manager] 2025-01-16 15:04:55.663018 | orchestrator | 2025-01-16 15:04:55.663026 | orchestrator | TASK [osism.commons.kubectl : Add repository Debian] *************************** 2025-01-16 15:04:55.663033 | orchestrator | Thursday 16 January 2025 15:03:17 +0000 (0:00:01.162) 0:03:22.132 ****** 2025-01-16 15:04:55.663041 | orchestrator | changed: [testbed-manager] 2025-01-16 15:04:55.663048 | orchestrator | 2025-01-16 15:04:55.663055 | orchestrator | TASK [osism.commons.kubectl : Install required packages] *********************** 2025-01-16 15:04:55.663063 | orchestrator | Thursday 16 January 2025 15:03:22 +0000 (0:00:04.943) 0:03:27.076 ****** 2025-01-16 15:04:55.663071 | orchestrator | changed: [testbed-manager] 2025-01-16 15:04:55.663078 | orchestrator | 2025-01-16 15:04:55.663085 | orchestrator | TASK [osism.commons.kubectl : Remove kubectl symlink] ************************** 2025-01-16 15:04:55.663092 | orchestrator | Thursday 16 January 2025 15:03:31 +0000 (0:00:09.327) 0:03:36.403 ****** 2025-01-16 15:04:55.663100 | orchestrator | ok: [testbed-manager] 2025-01-16 15:04:55.663107 | orchestrator | 2025-01-16 15:04:55.663115 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-01-16 15:04:55.663122 | orchestrator | 2025-01-16 15:04:55.663130 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-01-16 15:04:55.663138 | orchestrator | Thursday 16 January 2025 15:03:32 
+0000 (0:00:01.323) 0:03:37.727 ****** 2025-01-16 15:04:55.663145 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:04:55.663152 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:04:55.663160 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:04:55.663172 | orchestrator | 2025-01-16 15:04:55.663183 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-01-16 15:04:55.663190 | orchestrator | Thursday 16 January 2025 15:03:34 +0000 (0:00:01.469) 0:03:39.197 ****** 2025-01-16 15:04:55.663198 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:04:55.663205 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:04:55.663213 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:04:55.663220 | orchestrator | 2025-01-16 15:04:55.663227 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-01-16 15:04:55.663234 | orchestrator | Thursday 16 January 2025 15:03:35 +0000 (0:00:01.044) 0:03:40.241 ****** 2025-01-16 15:04:55.663242 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:04:55.663250 | orchestrator | 2025-01-16 15:04:55.663257 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2025-01-16 15:04:55.663264 | orchestrator | Thursday 16 January 2025 15:03:36 +0000 (0:00:01.262) 0:03:41.503 ****** 2025-01-16 15:04:55.663272 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-01-16 15:04:55.663279 | orchestrator | 2025-01-16 15:04:55.663286 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2025-01-16 15:04:55.663293 | orchestrator | Thursday 16 January 2025 15:03:37 +0000 (0:00:01.215) 0:03:42.719 ****** 2025-01-16 15:04:55.663301 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-01-16 15:04:55.663309 | orchestrator | 2025-01-16 15:04:55.663322 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2025-01-16 15:04:55.663331 | orchestrator | Thursday 16 January 2025 15:03:38 +0000 (0:00:01.195) 0:03:43.915 ****** 2025-01-16 15:04:55.663339 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:04:55.663529 | orchestrator | 2025-01-16 15:04:55.663539 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-01-16 15:04:55.663548 | orchestrator | Thursday 16 January 2025 15:03:39 +0000 (0:00:00.924) 0:03:44.840 ****** 2025-01-16 15:04:55.663556 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-01-16 15:04:55.663564 | orchestrator | 2025-01-16 15:04:55.663573 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-01-16 15:04:55.663580 | orchestrator | Thursday 16 January 2025 15:03:41 +0000 (0:00:01.370) 0:03:46.210 ****** 2025-01-16 15:04:55.663588 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:04:55.663596 | orchestrator | 2025-01-16 15:04:55.663605 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2025-01-16 15:04:55.663613 | orchestrator | Thursday 16 January 2025 15:03:42 +0000 (0:00:00.810) 0:03:47.021 ****** 2025-01-16 15:04:55.663641 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:04:55.663648 | orchestrator | 2025-01-16 15:04:55.663656 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-01-16 15:04:55.663664 | orchestrator | 
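Before Cilium is installed, the role checks from the deploy host that the API server behind the kube VIP answers and whether a Cilium DaemonSet already exists; the skipped version-check and update-decision tasks above suggest no prior install was found on this fresh cluster. A sketch of those probes, assuming variable names for the VIP address and kubeconfig path (the cluster endpoint used elsewhere in this run is https://192.168.16.8:6443):

  # Sketch: VIP reachability probe and a non-fatal check for an existing Cilium
  # install; the variable names are assumptions.
  - name: Wait for connectivity to kube VIP
    ansible.builtin.wait_for:
      host: "{{ kube_vip_address }}"
      port: 6443
      timeout: 120
    delegate_to: localhost

  - name: Test for existing Cilium install
    ansible.builtin.command:
      cmd: kubectl --kubeconfig {{ kubeconfig_path }} -n kube-system get daemonset cilium
    register: cilium_check
    failed_when: false
    changed_when: false
    delegate_to: localhost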
Thursday 16 January 2025 15:03:42 +0000 (0:00:00.805) 0:03:47.826 ****** 2025-01-16 15:04:55.663671 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:04:55.663679 | orchestrator | 2025-01-16 15:04:55.663687 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-01-16 15:04:55.663695 | orchestrator | Thursday 16 January 2025 15:03:43 +0000 (0:00:00.858) 0:03:48.685 ****** 2025-01-16 15:04:55.663704 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:04:55.663712 | orchestrator | 2025-01-16 15:04:55.663721 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-01-16 15:04:55.663729 | orchestrator | Thursday 16 January 2025 15:03:44 +0000 (0:00:00.860) 0:03:49.546 ****** 2025-01-16 15:04:55.663737 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-01-16 15:04:55.663746 | orchestrator | 2025-01-16 15:04:55.663754 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2025-01-16 15:04:55.663763 | orchestrator | Thursday 16 January 2025 15:03:51 +0000 (0:00:07.146) 0:03:56.692 ****** 2025-01-16 15:04:55.663772 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2025-01-16 15:04:55.663788 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2025-01-16 15:04:55.663797 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2025-01-16 15:04:55.663805 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2025-01-16 15:04:55.663814 | orchestrator | 2025-01-16 15:04:55.663822 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2025-01-16 15:04:55.663885 | orchestrator | Thursday 16 January 2025 15:04:24 +0000 (0:00:32.384) 0:04:29.077 ****** 2025-01-16 15:04:55.663894 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-01-16 15:04:55.663902 | orchestrator | 2025-01-16 15:04:55.663911 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2025-01-16 15:04:55.663919 | orchestrator | Thursday 16 January 2025 15:04:25 +0000 (0:00:01.593) 0:04:30.671 ****** 2025-01-16 15:04:55.663928 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-01-16 15:04:55.663936 | orchestrator | 2025-01-16 15:04:55.663944 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2025-01-16 15:04:55.663952 | orchestrator | Thursday 16 January 2025 15:04:27 +0000 (0:00:01.436) 0:04:32.108 ****** 2025-01-16 15:04:55.663961 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-01-16 15:04:55.663969 | orchestrator | 2025-01-16 15:04:55.663978 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2025-01-16 15:04:55.663992 | orchestrator | Thursday 16 January 2025 15:04:28 +0000 (0:00:01.507) 0:04:33.615 ****** 2025-01-16 15:04:55.664000 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:04:55.664009 | orchestrator | 2025-01-16 15:04:55.664017 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2025-01-16 15:04:55.664024 | orchestrator | Thursday 16 January 2025 15:04:29 +0000 (0:00:00.830) 0:04:34.445 ****** 2025-01-16 15:04:55.664032 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2025-01-16 15:04:55.664041 | orchestrator | ok: [testbed-node-0 -> 
localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2025-01-16 15:04:55.664048 | orchestrator | 2025-01-16 15:04:55.664056 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2025-01-16 15:04:55.664063 | orchestrator | Thursday 16 January 2025 15:04:31 +0000 (0:00:01.579) 0:04:36.025 ****** 2025-01-16 15:04:55.664070 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:04:55.664078 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:04:55.664086 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:04:55.664094 | orchestrator | 2025-01-16 15:04:55.664102 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2025-01-16 15:04:55.664109 | orchestrator | Thursday 16 January 2025 15:04:32 +0000 (0:00:01.242) 0:04:37.268 ****** 2025-01-16 15:04:55.664117 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:04:55.664125 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:04:55.664136 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:04:55.664144 | orchestrator | 2025-01-16 15:04:55.664152 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2025-01-16 15:04:55.664160 | orchestrator | 2025-01-16 15:04:55.664168 | orchestrator | TASK [osism.commons.k9s : Gather variables for each operating system] ********** 2025-01-16 15:04:55.664176 | orchestrator | Thursday 16 January 2025 15:04:34 +0000 (0:00:01.948) 0:04:39.216 ****** 2025-01-16 15:04:55.664184 | orchestrator | ok: [testbed-manager] 2025-01-16 15:04:55.664192 | orchestrator | 2025-01-16 15:04:55.664200 | orchestrator | TASK [osism.commons.k9s : Include distribution specific install tasks] ********* 2025-01-16 15:04:55.664207 | orchestrator | Thursday 16 January 2025 15:04:35 +0000 (0:00:00.803) 0:04:40.020 ****** 2025-01-16 15:04:55.664222 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2025-01-16 15:04:55.664230 | orchestrator | 2025-01-16 15:04:55.664239 | orchestrator | TASK [osism.commons.k9s : Install k9s packages] ******************************** 2025-01-16 15:04:55.664247 | orchestrator | Thursday 16 January 2025 15:04:36 +0000 (0:00:01.017) 0:04:41.038 ****** 2025-01-16 15:04:55.664264 | orchestrator | changed: [testbed-manager] 2025-01-16 15:04:55.664272 | orchestrator | 2025-01-16 15:04:55.664280 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-01-16 15:04:55.664287 | orchestrator | 2025-01-16 15:04:55.664295 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-01-16 15:04:55.664302 | orchestrator | Thursday 16 January 2025 15:04:40 +0000 (0:00:04.768) 0:04:45.806 ****** 2025-01-16 15:04:55.664310 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:04:55.664318 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:04:55.664326 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:04:55.664334 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:04:55.664342 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:04:55.664349 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:04:55.664357 | orchestrator | 2025-01-16 15:04:55.664365 | orchestrator | TASK [Manage labels] *********************************************************** 2025-01-16 15:04:55.664373 | orchestrator | Thursday 16 January 2025 15:04:42 +0000 (0:00:01.627) 0:04:47.434 ****** 2025-01-16 15:04:55.664381 | 
orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-01-16 15:04:55.664388 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-01-16 15:04:55.664396 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-01-16 15:04:55.664404 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-01-16 15:04:55.664412 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-01-16 15:04:55.664420 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-01-16 15:04:55.664428 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-01-16 15:04:55.664436 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-01-16 15:04:55.664444 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-01-16 15:04:55.664451 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-01-16 15:04:55.664459 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-01-16 15:04:55.664467 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-01-16 15:04:55.664478 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-01-16 15:04:55.664485 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-01-16 15:04:55.664492 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-01-16 15:04:55.664499 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-01-16 15:04:55.664507 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-01-16 15:04:55.664515 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-01-16 15:04:55.664522 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-01-16 15:04:55.664529 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-01-16 15:04:55.664537 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-01-16 15:04:55.664545 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-01-16 15:04:55.664552 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-01-16 15:04:55.664560 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-01-16 15:04:55.664567 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-01-16 15:04:55.664579 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-01-16 15:04:55.664586 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-01-16 15:04:55.664593 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-01-16 15:04:55.664601 | orchestrator | ok: [testbed-node-2 -> localhost] => 
(item=node-role.osism.tech/rook-rgw=true) 2025-01-16 15:04:55.664608 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-01-16 15:04:55.664615 | orchestrator | 2025-01-16 15:04:55.664642 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-01-16 15:04:55.664649 | orchestrator | Thursday 16 January 2025 15:04:48 +0000 (0:00:06.179) 0:04:53.613 ****** 2025-01-16 15:04:55.664657 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:04:55.664665 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:04:55.664672 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:04:55.664679 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:04:55.664687 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:04:55.664695 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:04:55.664702 | orchestrator | 2025-01-16 15:04:55.664710 | orchestrator | TASK [Manage taints] *********************************************************** 2025-01-16 15:04:55.664723 | orchestrator | Thursday 16 January 2025 15:04:50 +0000 (0:00:01.560) 0:04:55.174 ****** 2025-01-16 15:04:58.691475 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:04:58.691608 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:04:58.691711 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:04:58.691729 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:04:58.691744 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:04:58.691758 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:04:58.691774 | orchestrator | 2025-01-16 15:04:58.691790 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 15:04:58.691807 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 15:04:58.691824 | orchestrator | testbed-node-0 : ok=46  changed=21  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-01-16 15:04:58.691838 | orchestrator | testbed-node-1 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-01-16 15:04:58.691853 | orchestrator | testbed-node-2 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-01-16 15:04:58.691867 | orchestrator | testbed-node-3 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-01-16 15:04:58.691882 | orchestrator | testbed-node-4 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-01-16 15:04:58.691896 | orchestrator | testbed-node-5 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-01-16 15:04:58.691909 | orchestrator | 2025-01-16 15:04:58.691924 | orchestrator | 2025-01-16 15:04:58.691938 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 15:04:58.691953 | orchestrator | Thursday 16 January 2025 15:04:52 +0000 (0:00:02.670) 0:04:57.844 ****** 2025-01-16 15:04:58.691968 | orchestrator | =============================================================================== 2025-01-16 15:04:58.691982 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.05s 2025-01-16 15:04:58.691997 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 32.38s 2025-01-16 15:04:58.692011 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 14.54s 
2025-01-16 15:04:58.692053 | orchestrator | k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites -- 10.31s 2025-01-16 15:04:58.692070 | orchestrator | osism.commons.kubectl : Install required packages ----------------------- 9.33s 2025-01-16 15:04:58.692086 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 7.15s 2025-01-16 15:04:58.692102 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 6.91s 2025-01-16 15:04:58.692132 | orchestrator | Manage labels ----------------------------------------------------------- 6.18s 2025-01-16 15:04:58.692148 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.56s 2025-01-16 15:04:58.692163 | orchestrator | osism.commons.kubectl : Add repository Debian --------------------------- 4.94s 2025-01-16 15:04:58.692179 | orchestrator | osism.commons.k9s : Install k9s packages -------------------------------- 4.77s 2025-01-16 15:04:58.692196 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 4.19s 2025-01-16 15:04:58.692219 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 3.95s 2025-01-16 15:04:58.692326 | orchestrator | k3s_prereq : Set bridge-nf-call-iptables (just to be sure) -------------- 3.87s 2025-01-16 15:04:58.692343 | orchestrator | k3s_prereq : Add /usr/local/bin to sudo secure_path --------------------- 3.61s 2025-01-16 15:04:58.692359 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.14s 2025-01-16 15:04:58.692376 | orchestrator | k3s_prereq : Set same timezone on every Server -------------------------- 3.02s 2025-01-16 15:04:58.692392 | orchestrator | Manage taints ----------------------------------------------------------- 2.67s 2025-01-16 15:04:58.692408 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 2.63s 2025-01-16 15:04:58.692423 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.48s 2025-01-16 15:04:58.692438 | orchestrator | 2025-01-16 15:04:55 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:04:58.692453 | orchestrator | 2025-01-16 15:04:55 | INFO  | Task 0bb27000-50f9-46f7-b437-d508e84e01da is in state STARTED 2025-01-16 15:04:58.692467 | orchestrator | 2025-01-16 15:04:55 | INFO  | Task 00f712bf-32a8-41d7-8326-5f0c171163e9 is in state STARTED 2025-01-16 15:04:58.692482 | orchestrator | 2025-01-16 15:04:55 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:04:58.692517 | orchestrator | 2025-01-16 15:04:58 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:05:01.720506 | orchestrator | 2025-01-16 15:04:58 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:05:01.720678 | orchestrator | 2025-01-16 15:04:58 | INFO  | Task 0bb27000-50f9-46f7-b437-d508e84e01da is in state STARTED 2025-01-16 15:05:01.720702 | orchestrator | 2025-01-16 15:04:58 | INFO  | Task 00f712bf-32a8-41d7-8326-5f0c171163e9 is in state STARTED 2025-01-16 15:05:01.720727 | orchestrator | 2025-01-16 15:04:58 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:05:01.720774 | orchestrator | 2025-01-16 15:05:01 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:05:04.741563 | orchestrator | 2025-01-16 15:05:01 | 
INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:05:04.741835 | orchestrator | 2025-01-16 15:05:01 | INFO  | Task 0bb27000-50f9-46f7-b437-d508e84e01da is in state STARTED 2025-01-16 15:05:04.741865 | orchestrator | 2025-01-16 15:05:01 | INFO  | Task 00f712bf-32a8-41d7-8326-5f0c171163e9 is in state STARTED 2025-01-16 15:05:04.741881 | orchestrator | 2025-01-16 15:05:01 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:05:04.741915 | orchestrator | 2025-01-16 15:05:04 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:05:04.742333 | orchestrator | 2025-01-16 15:05:04 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:05:04.742424 | orchestrator | 2025-01-16 15:05:04 | INFO  | Task 0bb27000-50f9-46f7-b437-d508e84e01da is in state SUCCESS 2025-01-16 15:05:04.742460 | orchestrator | 2025-01-16 15:05:04 | INFO  | Task 00f712bf-32a8-41d7-8326-5f0c171163e9 is in state STARTED 2025-01-16 15:05:07.783005 | orchestrator | 2025-01-16 15:05:04 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:05:07.783106 | orchestrator | 2025-01-16 15:05:07 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:05:10.803610 | orchestrator | 2025-01-16 15:05:07 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:05:10.803775 | orchestrator | 2025-01-16 15:05:07 | INFO  | Task 00f712bf-32a8-41d7-8326-5f0c171163e9 is in state STARTED 2025-01-16 15:05:10.803790 | orchestrator | 2025-01-16 15:05:07 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:05:10.803815 | orchestrator | 2025-01-16 15:05:10 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:05:10.804316 | orchestrator | 2025-01-16 15:05:10 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:05:10.804452 | orchestrator | 2025-01-16 15:05:10 | INFO  | Task 00f712bf-32a8-41d7-8326-5f0c171163e9 is in state SUCCESS 2025-01-16 15:05:13.824737 | orchestrator | 2025-01-16 15:05:10 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:05:13.824873 | orchestrator | 2025-01-16 15:05:13 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:05:16.845796 | orchestrator | 2025-01-16 15:05:13 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:05:16.845909 | orchestrator | 2025-01-16 15:05:13 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:05:16.845941 | orchestrator | 2025-01-16 15:05:16 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:05:16.846174 | orchestrator | 2025-01-16 15:05:16 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:05:19.868480 | orchestrator | 2025-01-16 15:05:16 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:05:19.868608 | orchestrator | 2025-01-16 15:05:19 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:05:19.868940 | orchestrator | 2025-01-16 15:05:19 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:05:22.892451 | orchestrator | 2025-01-16 15:05:19 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:05:22.892586 | orchestrator | 2025-01-16 15:05:22 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:05:25.912086 | orchestrator | 2025-01-16 15:05:22 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 
is in state STARTED 2025-01-16 15:05:25.912202 | orchestrator | 2025-01-16 15:05:22 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:05:25.912233 | orchestrator | 2025-01-16 15:05:25 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:05:28.931326 | orchestrator | 2025-01-16 15:05:25 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:05:28.931456 | orchestrator | 2025-01-16 15:05:25 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:05:28.931503 | orchestrator | 2025-01-16 15:05:28 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:05:31.947602 | orchestrator | 2025-01-16 15:05:28 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:05:31.947697 | orchestrator | 2025-01-16 15:05:28 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:05:31.947714 | orchestrator | 2025-01-16 15:05:31 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:05:34.972511 | orchestrator | 2025-01-16 15:05:31 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state STARTED 2025-01-16 15:05:34.972690 | orchestrator | 2025-01-16 15:05:31 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:05:34.972732 | orchestrator | 2025-01-16 15:05:34 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:05:34.976128 | orchestrator | 2025-01-16 15:05:34 | INFO  | Task 31f2197f-46b6-4593-b320-01070af0c657 is in state SUCCESS 2025-01-16 15:05:34.982220 | orchestrator | 2025-01-16 15:05:34.982312 | orchestrator | 2025-01-16 15:05:34.982331 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-01-16 15:05:34.982347 | orchestrator | 2025-01-16 15:05:34.982361 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-01-16 15:05:34.982376 | orchestrator | Thursday 16 January 2025 15:04:57 +0000 (0:00:02.006) 0:00:02.006 ****** 2025-01-16 15:05:34.982390 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-01-16 15:05:34.982404 | orchestrator | 2025-01-16 15:05:34.982419 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-01-16 15:05:34.982433 | orchestrator | Thursday 16 January 2025 15:04:59 +0000 (0:00:01.612) 0:00:03.618 ****** 2025-01-16 15:05:34.982447 | orchestrator | changed: [testbed-manager] 2025-01-16 15:05:34.982461 | orchestrator | 2025-01-16 15:05:34.982476 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-01-16 15:05:34.982490 | orchestrator | Thursday 16 January 2025 15:05:00 +0000 (0:00:01.681) 0:00:05.299 ****** 2025-01-16 15:05:34.982505 | orchestrator | changed: [testbed-manager] 2025-01-16 15:05:34.982519 | orchestrator | 2025-01-16 15:05:34.982533 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 15:05:34.982547 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 15:05:34.982563 | orchestrator | 2025-01-16 15:05:34.982577 | orchestrator | 2025-01-16 15:05:34.982591 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 15:05:34.982632 | orchestrator | Thursday 16 January 2025 15:05:02 +0000 (0:00:01.382) 0:00:06.682 ****** 2025-01-16 15:05:34.982652 | orchestrator | 
=============================================================================== 2025-01-16 15:05:34.982667 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.68s 2025-01-16 15:05:34.982681 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.61s 2025-01-16 15:05:34.982695 | orchestrator | Change server address in the kubeconfig file ---------------------------- 1.38s 2025-01-16 15:05:34.982709 | orchestrator | 2025-01-16 15:05:34.982723 | orchestrator | 2025-01-16 15:05:34.982738 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-01-16 15:05:34.982751 | orchestrator | 2025-01-16 15:05:34.982765 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-01-16 15:05:34.982780 | orchestrator | Thursday 16 January 2025 15:04:57 +0000 (0:00:02.116) 0:00:02.116 ****** 2025-01-16 15:05:34.982794 | orchestrator | ok: [testbed-manager] 2025-01-16 15:05:34.982808 | orchestrator | 2025-01-16 15:05:34.982822 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-01-16 15:05:34.982836 | orchestrator | Thursday 16 January 2025 15:04:59 +0000 (0:00:01.297) 0:00:03.413 ****** 2025-01-16 15:05:34.982850 | orchestrator | ok: [testbed-manager] 2025-01-16 15:05:34.982864 | orchestrator | 2025-01-16 15:05:34.982878 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-01-16 15:05:34.982911 | orchestrator | Thursday 16 January 2025 15:05:00 +0000 (0:00:01.111) 0:00:04.525 ****** 2025-01-16 15:05:34.982950 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-01-16 15:05:34.982965 | orchestrator | 2025-01-16 15:05:34.982979 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-01-16 15:05:34.982993 | orchestrator | Thursday 16 January 2025 15:05:01 +0000 (0:00:01.535) 0:00:06.060 ****** 2025-01-16 15:05:34.983008 | orchestrator | changed: [testbed-manager] 2025-01-16 15:05:34.983022 | orchestrator | 2025-01-16 15:05:34.983036 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-01-16 15:05:34.983050 | orchestrator | Thursday 16 January 2025 15:05:03 +0000 (0:00:01.425) 0:00:07.485 ****** 2025-01-16 15:05:34.983064 | orchestrator | changed: [testbed-manager] 2025-01-16 15:05:34.983078 | orchestrator | 2025-01-16 15:05:34.983092 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-01-16 15:05:34.983106 | orchestrator | Thursday 16 January 2025 15:05:04 +0000 (0:00:01.058) 0:00:08.543 ****** 2025-01-16 15:05:34.983120 | orchestrator | changed: [testbed-manager -> localhost] 2025-01-16 15:05:34.983134 | orchestrator | 2025-01-16 15:05:34.983148 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-01-16 15:05:34.983162 | orchestrator | Thursday 16 January 2025 15:05:05 +0000 (0:00:01.466) 0:00:10.010 ****** 2025-01-16 15:05:34.983176 | orchestrator | changed: [testbed-manager -> localhost] 2025-01-16 15:05:34.983190 | orchestrator | 2025-01-16 15:05:34.983204 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-01-16 15:05:34.983218 | orchestrator | Thursday 16 January 2025 15:05:06 +0000 (0:00:00.997) 0:00:11.008 ****** 2025-01-16 15:05:34.983231 | orchestrator | ok: 
[testbed-manager] 2025-01-16 15:05:34.983246 | orchestrator | 2025-01-16 15:05:34.983260 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-01-16 15:05:34.983274 | orchestrator | Thursday 16 January 2025 15:05:07 +0000 (0:00:01.025) 0:00:12.033 ****** 2025-01-16 15:05:34.983288 | orchestrator | ok: [testbed-manager] 2025-01-16 15:05:34.983302 | orchestrator | 2025-01-16 15:05:34.983316 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 15:05:34.983331 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 15:05:34.983345 | orchestrator | 2025-01-16 15:05:34.983359 | orchestrator | 2025-01-16 15:05:34.983373 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 15:05:34.983387 | orchestrator | Thursday 16 January 2025 15:05:09 +0000 (0:00:01.318) 0:00:13.352 ****** 2025-01-16 15:05:34.983401 | orchestrator | =============================================================================== 2025-01-16 15:05:34.983415 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.54s 2025-01-16 15:05:34.983429 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.47s 2025-01-16 15:05:34.983443 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.43s 2025-01-16 15:05:34.983465 | orchestrator | Enable kubectl command line completion ---------------------------------- 1.32s 2025-01-16 15:05:34.983480 | orchestrator | Get home directory of operator user ------------------------------------- 1.30s 2025-01-16 15:05:34.983494 | orchestrator | Create .kube directory -------------------------------------------------- 1.11s 2025-01-16 15:05:34.983508 | orchestrator | Change server address in the kubeconfig --------------------------------- 1.06s 2025-01-16 15:05:34.983522 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 1.03s 2025-01-16 15:05:34.983536 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 1.00s 2025-01-16 15:05:34.983550 | orchestrator | 2025-01-16 15:05:34.983564 | orchestrator | 2025-01-16 15:05:34.983578 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-01-16 15:05:34.983592 | orchestrator | 2025-01-16 15:05:34.983661 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-01-16 15:05:34.983679 | orchestrator | Thursday 16 January 2025 15:00:34 +0000 (0:00:00.655) 0:00:00.655 ****** 2025-01-16 15:05:34.983703 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:05:34.983718 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:05:34.983732 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:05:34.983746 | orchestrator | 2025-01-16 15:05:34.983761 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-01-16 15:05:34.983775 | orchestrator | Thursday 16 January 2025 15:00:35 +0000 (0:00:00.824) 0:00:01.480 ****** 2025-01-16 15:05:34.983789 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-01-16 15:05:34.983803 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-01-16 15:05:34.983818 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-01-16 15:05:34.983832 | 
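[Editor's note] The two kubeconfig plays logged above fetch the k3s admin kubeconfig from the first control-plane node, install it for the operator user on the manager, and rewrite the embedded server address before enabling kubectl completion. A minimal, hypothetical Ansible sketch of that pattern is shown below; the source path (the default k3s location), the placeholder API address, and all variable names are assumptions for illustration and are not taken from the actual testbed roles.

```yaml
# Hypothetical sketch only: illustrates the kubeconfig hand-off performed by
# the plays above. Paths, the placeholder VIP, and variable names are assumed.
- name: Prepare kubeconfig file (sketch)
  hosts: testbed-manager
  vars:
    first_master: testbed-node-0          # assumption: first control-plane node
    kube_api_address: 192.168.16.254      # placeholder address, not from the log
  tasks:
    - name: Get kubeconfig file from the first master
      ansible.builtin.slurp:
        src: /etc/rancher/k3s/k3s.yaml    # default k3s kubeconfig location
      delegate_to: "{{ first_master }}"
      register: k3s_kubeconfig

    - name: Create .kube directory
      ansible.builtin.file:
        path: "{{ ansible_env.HOME }}/.kube"
        state: directory
        mode: "0700"

    - name: Write kubeconfig file
      ansible.builtin.copy:
        content: "{{ k3s_kubeconfig.content | b64decode }}"
        dest: "{{ ansible_env.HOME }}/.kube/config"
        mode: "0600"

    - name: Change server address in the kubeconfig
      ansible.builtin.replace:
        path: "{{ ansible_env.HOME }}/.kube/config"
        regexp: 'https://127\.0\.0\.1:6443'
        replace: "https://{{ kube_api_address }}:6443"
```

Rewriting the server entry is needed because k3s writes its kubeconfig pointing at 127.0.0.1, which is only reachable on the control-plane node itself; pointing it at the cluster VIP makes the same file usable from the manager. [End of editor's note]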
orchestrator | 2025-01-16 15:05:34.983846 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-01-16 15:05:34.983860 | orchestrator | 2025-01-16 15:05:34.983881 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-01-16 15:05:34.983895 | orchestrator | Thursday 16 January 2025 15:00:35 +0000 (0:00:00.643) 0:00:02.123 ****** 2025-01-16 15:05:34.983910 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:05:34.983924 | orchestrator | 2025-01-16 15:05:34.983939 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-01-16 15:05:34.983952 | orchestrator | Thursday 16 January 2025 15:00:37 +0000 (0:00:01.481) 0:00:03.605 ****** 2025-01-16 15:05:34.983967 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:05:34.983981 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:05:34.983995 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:05:34.984009 | orchestrator | 2025-01-16 15:05:34.984023 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-01-16 15:05:34.984037 | orchestrator | Thursday 16 January 2025 15:00:38 +0000 (0:00:00.844) 0:00:04.449 ****** 2025-01-16 15:05:34.984052 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:05:34.984065 | orchestrator | 2025-01-16 15:05:34.984080 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-01-16 15:05:34.984094 | orchestrator | Thursday 16 January 2025 15:00:39 +0000 (0:00:01.070) 0:00:05.519 ****** 2025-01-16 15:05:34.984108 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:05:34.984122 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:05:34.984136 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:05:34.984150 | orchestrator | 2025-01-16 15:05:34.984164 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-01-16 15:05:34.984178 | orchestrator | Thursday 16 January 2025 15:00:40 +0000 (0:00:00.914) 0:00:06.434 ****** 2025-01-16 15:05:34.984192 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-01-16 15:05:34.984207 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-01-16 15:05:34.984221 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-01-16 15:05:34.984239 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-01-16 15:05:34.984253 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-01-16 15:05:34.984267 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-01-16 15:05:34.984282 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-01-16 15:05:34.984296 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-01-16 15:05:34.984310 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-01-16 15:05:34.984325 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-01-16 
15:05:34.984339 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-01-16 15:05:34.984360 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-01-16 15:05:34.984376 | orchestrator | 2025-01-16 15:05:34.984391 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-01-16 15:05:34.984405 | orchestrator | Thursday 16 January 2025 15:00:42 +0000 (0:00:02.662) 0:00:09.097 ****** 2025-01-16 15:05:34.984419 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-01-16 15:05:34.984433 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-01-16 15:05:34.984467 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-01-16 15:05:34.984481 | orchestrator | 2025-01-16 15:05:34.984496 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-01-16 15:05:34.984517 | orchestrator | Thursday 16 January 2025 15:00:44 +0000 (0:00:01.495) 0:00:10.592 ****** 2025-01-16 15:05:34.984531 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-01-16 15:05:34.984545 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-01-16 15:05:34.984560 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-01-16 15:05:34.984574 | orchestrator | 2025-01-16 15:05:34.984588 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-01-16 15:05:34.984632 | orchestrator | Thursday 16 January 2025 15:00:46 +0000 (0:00:02.095) 0:00:12.688 ****** 2025-01-16 15:05:34.984651 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-01-16 15:05:34.984665 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.984679 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-01-16 15:05:34.984694 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.984708 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-01-16 15:05:34.984722 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.984736 | orchestrator | 2025-01-16 15:05:34.984767 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-01-16 15:05:34.984783 | orchestrator | Thursday 16 January 2025 15:00:48 +0000 (0:00:01.666) 0:00:14.354 ****** 2025-01-16 15:05:34.984798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-01-16 15:05:34.984818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-01-16 15:05:34.984834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-01-16 15:05:34.984857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-01-16 15:05:34.984872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-01-16 15:05:34.984897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-01-16 15:05:34.984913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-01-16 15:05:34.984929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': 
{'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b009d7c1724a84233a8cec868752ca402e38d116', '__omit_place_holder__b009d7c1724a84233a8cec868752ca402e38d116'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-01-16 15:05:34.984944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-01-16 15:05:34.984959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b009d7c1724a84233a8cec868752ca402e38d116', '__omit_place_holder__b009d7c1724a84233a8cec868752ca402e38d116'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-01-16 15:05:34.984981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-01-16 15:05:34.984996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b009d7c1724a84233a8cec868752ca402e38d116', '__omit_place_holder__b009d7c1724a84233a8cec868752ca402e38d116'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-01-16 15:05:34.985010 | orchestrator | 2025-01-16 15:05:34.985030 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-01-16 15:05:34.985045 | orchestrator | Thursday 16 January 2025 15:00:51 +0000 (0:00:03.678) 0:00:18.033 ****** 2025-01-16 15:05:34.985059 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:05:34.985073 | orchestrator | changed: [testbed-node-2] 2025-01-16 
15:05:34.985088 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:05:34.985102 | orchestrator | 2025-01-16 15:05:34.985116 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-01-16 15:05:34.985130 | orchestrator | Thursday 16 January 2025 15:00:55 +0000 (0:00:04.049) 0:00:22.083 ****** 2025-01-16 15:05:34.985144 | orchestrator | skipping: [testbed-node-0] => (item=users)  2025-01-16 15:05:34.985159 | orchestrator | skipping: [testbed-node-2] => (item=users)  2025-01-16 15:05:34.985173 | orchestrator | skipping: [testbed-node-1] => (item=users)  2025-01-16 15:05:34.985187 | orchestrator | skipping: [testbed-node-2] => (item=rules)  2025-01-16 15:05:34.985201 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.985216 | orchestrator | skipping: [testbed-node-0] => (item=rules)  2025-01-16 15:05:34.985230 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.985244 | orchestrator | skipping: [testbed-node-1] => (item=rules)  2025-01-16 15:05:34.985258 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.985272 | orchestrator | 2025-01-16 15:05:34.985362 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-01-16 15:05:34.985379 | orchestrator | Thursday 16 January 2025 15:00:57 +0000 (0:00:02.058) 0:00:24.141 ****** 2025-01-16 15:05:34.985394 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:05:34.985408 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:05:34.985422 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:05:34.985436 | orchestrator | 2025-01-16 15:05:34.985451 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-01-16 15:05:34.985465 | orchestrator | Thursday 16 January 2025 15:00:59 +0000 (0:00:01.388) 0:00:25.530 ****** 2025-01-16 15:05:34.985479 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.985500 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.985515 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.985529 | orchestrator | 2025-01-16 15:05:34.985543 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-01-16 15:05:34.985564 | orchestrator | Thursday 16 January 2025 15:01:00 +0000 (0:00:01.391) 0:00:26.921 ****** 2025-01-16 15:05:34.985580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-01-16 15:05:34.985595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-01-16 15:05:34.985632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-01-16 15:05:34.985656 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-01-16 15:05:34.985671 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-01-16 15:05:34.985686 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-01-16 15:05:34.985707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-01-16 15:05:34.985722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': 
True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-01-16 15:05:34.985737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-01-16 15:05:34.985752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b009d7c1724a84233a8cec868752ca402e38d116', '__omit_place_holder__b009d7c1724a84233a8cec868752ca402e38d116'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-01-16 15:05:34.985774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b009d7c1724a84233a8cec868752ca402e38d116', '__omit_place_holder__b009d7c1724a84233a8cec868752ca402e38d116'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-01-16 15:05:34.985789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b009d7c1724a84233a8cec868752ca402e38d116', '__omit_place_holder__b009d7c1724a84233a8cec868752ca402e38d116'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-01-16 15:05:34.985803 | orchestrator | 2025-01-16 15:05:34.985817 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-01-16 15:05:34.985832 | orchestrator | Thursday 16 January 2025 15:01:05 +0000 (0:00:04.855) 0:00:31.777 ****** 2025-01-16 15:05:34.985846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-01-16 15:05:34.985867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-01-16 15:05:34.985882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-01-16 15:05:34.985897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-01-16 15:05:34.985917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-01-16 15:05:34.985932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-01-16 15:05:34.985947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-01-16 15:05:34.985971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-01-16 15:05:34.985986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b009d7c1724a84233a8cec868752ca402e38d116', '__omit_place_holder__b009d7c1724a84233a8cec868752ca402e38d116'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-01-16 15:05:34.986006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-01-16 15:05:34.986087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b009d7c1724a84233a8cec868752ca402e38d116', '__omit_place_holder__b009d7c1724a84233a8cec868752ca402e38d116'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-01-16 15:05:34.986113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b009d7c1724a84233a8cec868752ca402e38d116', '__omit_place_holder__b009d7c1724a84233a8cec868752ca402e38d116'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-01-16 15:05:34.986129 | orchestrator | 2025-01-16 15:05:34.986143 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-01-16 15:05:34.986168 | orchestrator | Thursday 16 January 2025 15:01:11 +0000 (0:00:06.125) 0:00:37.902 ****** 2025-01-16 15:05:34.986183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-01-16 15:05:34.986206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-01-16 15:05:34.986221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-01-16 15:05:34.986236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-01-16 
15:05:34.986251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-01-16 15:05:34.986272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-01-16 15:05:34.986287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-01-16 15:05:34.986309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b009d7c1724a84233a8cec868752ca402e38d116', '__omit_place_holder__b009d7c1724a84233a8cec868752ca402e38d116'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-01-16 15:05:34.986323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-01-16 15:05:34.986338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__b009d7c1724a84233a8cec868752ca402e38d116', '__omit_place_holder__b009d7c1724a84233a8cec868752ca402e38d116'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-01-16 15:05:34.986353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-01-16 15:05:34.986367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b009d7c1724a84233a8cec868752ca402e38d116', '__omit_place_holder__b009d7c1724a84233a8cec868752ca402e38d116'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-01-16 15:05:34.986382 | orchestrator | 2025-01-16 15:05:34.986397 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-01-16 15:05:34.986411 | orchestrator | Thursday 16 January 2025 15:01:14 +0000 (0:00:02.576) 0:00:40.478 ****** 2025-01-16 15:05:34.986425 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-01-16 15:05:34.986439 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-01-16 15:05:34.986463 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-01-16 15:05:34.986498 | orchestrator | 2025-01-16 15:05:34.986515 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-01-16 15:05:34.986529 | orchestrator | Thursday 16 January 2025 15:01:18 +0000 (0:00:03.977) 0:00:44.456 ****** 2025-01-16 15:05:34.986543 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)  2025-01-16 15:05:34.986558 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.986572 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)  2025-01-16 15:05:34.986586 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.986601 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)  2025-01-16 15:05:34.986645 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.986671 | orchestrator | 2025-01-16 15:05:34.986695 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-01-16 15:05:34.986711 | orchestrator | Thursday 16 January 2025 15:01:20 +0000 (0:00:02.548) 0:00:47.004 ****** 2025-01-16 15:05:34.986725 | orchestrator | skipping: 
[testbed-node-0] 2025-01-16 15:05:34.986740 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.986754 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.986768 | orchestrator | 2025-01-16 15:05:34.986783 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-01-16 15:05:34.986797 | orchestrator | Thursday 16 January 2025 15:01:22 +0000 (0:00:02.045) 0:00:49.050 ****** 2025-01-16 15:05:34.986811 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-01-16 15:05:34.986826 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-01-16 15:05:34.986840 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-01-16 15:05:34.986854 | orchestrator | 2025-01-16 15:05:34.986868 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-01-16 15:05:34.986883 | orchestrator | Thursday 16 January 2025 15:01:24 +0000 (0:00:01.855) 0:00:50.906 ****** 2025-01-16 15:05:34.986897 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-01-16 15:05:34.986911 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-01-16 15:05:34.986925 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-01-16 15:05:34.986940 | orchestrator | 2025-01-16 15:05:34.986955 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-01-16 15:05:34.986969 | orchestrator | Thursday 16 January 2025 15:01:26 +0000 (0:00:01.874) 0:00:52.781 ****** 2025-01-16 15:05:34.986983 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-01-16 15:05:34.986997 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-01-16 15:05:34.987011 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-01-16 15:05:34.987026 | orchestrator | 2025-01-16 15:05:34.987040 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-01-16 15:05:34.987054 | orchestrator | Thursday 16 January 2025 15:01:28 +0000 (0:00:01.419) 0:00:54.201 ****** 2025-01-16 15:05:34.987068 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-01-16 15:05:34.987082 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-01-16 15:05:34.987111 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-01-16 15:05:34.987136 | orchestrator | 2025-01-16 15:05:34.987151 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-01-16 15:05:34.987165 | orchestrator | Thursday 16 January 2025 15:01:29 +0000 (0:00:01.688) 0:00:55.889 ****** 2025-01-16 15:05:34.987179 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:05:34.987201 | orchestrator | 2025-01-16 15:05:34.987216 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-01-16 15:05:34.987230 | orchestrator | Thursday 16 January 2025 15:01:30 +0000 (0:00:00.499) 0:00:56.389 ****** 2025-01-16 
15:05:34.987245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-01-16 15:05:34.987275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-01-16 15:05:34.987291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-01-16 15:05:34.987306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-01-16 15:05:34.987321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-01-16 15:05:34.987335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-01-16 
15:05:34.987350 | orchestrator | 2025-01-16 15:05:34.987365 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-01-16 15:05:34.987379 | orchestrator | Thursday 16 January 2025 15:01:31 +0000 (0:00:01.611) 0:00:58.000 ****** 2025-01-16 15:05:34.987401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-01-16 15:05:34.987420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-01-16 15:05:34.987435 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.987458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-01-16 15:05:34.987473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-01-16 15:05:34.987488 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.987502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-01-16 15:05:34.987517 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-01-16 15:05:34.987532 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.987546 | orchestrator | 2025-01-16 15:05:34.987561 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-01-16 15:05:34.987575 | orchestrator | Thursday 16 January 2025 15:01:32 +0000 (0:00:00.589) 0:00:58.590 ****** 2025-01-16 15:05:34.987590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-01-16 15:05:34.987643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-01-16 15:05:34.987660 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.987674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-01-16 15:05:34.987697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-01-16 15:05:34.987713 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.987727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-01-16 15:05:34.987742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-01-16 15:05:34.987757 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.987771 | orchestrator | 2025-01-16 15:05:34.987785 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-01-16 15:05:34.987799 | orchestrator | Thursday 16 January 2025 15:01:33 +0000 (0:00:00.995) 0:00:59.585 ****** 2025-01-16 15:05:34.987813 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-01-16 15:05:34.987828 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-01-16 15:05:34.987849 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-01-16 15:05:34.987871 | orchestrator | 2025-01-16 15:05:34.987885 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-01-16 15:05:34.987900 | orchestrator | Thursday 16 January 2025 15:01:34 +0000 (0:00:01.381) 0:01:00.967 ****** 2025-01-16 15:05:34.987914 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)  2025-01-16 15:05:34.987928 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.987943 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)  2025-01-16 15:05:34.987957 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.987971 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)  2025-01-16 15:05:34.987985 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.987999 | orchestrator | 2025-01-16 15:05:34.988013 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-01-16 15:05:34.988027 | orchestrator | Thursday 16 January 2025 15:01:35 +0000 (0:00:00.844) 0:01:01.811 ****** 2025-01-16 15:05:34.988041 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-01-16 15:05:34.988056 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-01-16 15:05:34.988070 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-01-16 15:05:34.988084 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-01-16 
15:05:34.988098 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.988112 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-01-16 15:05:34.988127 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.988141 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-01-16 15:05:34.988155 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.988169 | orchestrator | 2025-01-16 15:05:34.988184 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-01-16 15:05:34.988198 | orchestrator | Thursday 16 January 2025 15:01:37 +0000 (0:00:01.475) 0:01:03.286 ****** 2025-01-16 15:05:34.988231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-01-16 15:05:34.988247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-01-16 15:05:34.988262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-01-16 15:05:34.988284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-01-16 15:05:34.988299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-01-16 15:05:34.988314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-01-16 15:05:34.988329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-01-16 15:05:34.988350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-01-16 15:05:34.988365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b009d7c1724a84233a8cec868752ca402e38d116', '__omit_place_holder__b009d7c1724a84233a8cec868752ca402e38d116'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-01-16 15:05:34.988387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b009d7c1724a84233a8cec868752ca402e38d116', 
'__omit_place_holder__b009d7c1724a84233a8cec868752ca402e38d116'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-01-16 15:05:34.988408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-01-16 15:05:34.988423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b009d7c1724a84233a8cec868752ca402e38d116', '__omit_place_holder__b009d7c1724a84233a8cec868752ca402e38d116'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-01-16 15:05:34.988438 | orchestrator | 2025-01-16 15:05:34.988452 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-01-16 15:05:34.988471 | orchestrator | Thursday 16 January 2025 15:01:39 +0000 (0:00:01.969) 0:01:05.256 ****** 2025-01-16 15:05:34.988486 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:05:34.988500 | orchestrator | 2025-01-16 15:05:34.988515 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-01-16 15:05:34.988529 | orchestrator | Thursday 16 January 2025 15:01:39 +0000 (0:00:00.532) 0:01:05.789 ****** 2025-01-16 15:05:34.988550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-api:2024.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-01-16 15:05:34.988575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-01-16 15:05:34.988597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.988662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.988680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-api:2024.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-01-16 15:05:34.988695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-01-16 15:05:34.988711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.988731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 
'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.988757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-api:2024.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-01-16 15:05:34.988770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-01-16 15:05:34.988784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.988798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.988810 | orchestrator | 2025-01-16 15:05:34.988823 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-01-16 15:05:34.988836 | orchestrator | Thursday 16 January 2025 15:01:43 +0000 (0:00:03.778) 0:01:09.568 ****** 2025-01-16 
15:05:34.988849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-api:2024.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-01-16 15:05:34.988869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-01-16 15:05:34.988888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.988906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.988920 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.988934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-api:2024.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-01-16 15:05:34.988947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-01-16 15:05:34.988960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.988979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.988998 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.989015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-api:2024.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-01-16 15:05:34.989029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  
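The aodh items above carry the per-service haproxy map that the haproxy-config role consumes: an internal aodh_api entry and an external aodh_api_external entry, both HTTP on port 8042, with the external one tied to api.testbed.osism.xyz. As a rough sketch (not the kolla-ansible template itself; the function name, backend list and VIP address below are illustrative assumptions), such an entry could be rendered into an haproxy listen section like this:

# Illustrative only: not the kolla-ansible haproxy template. The field names come
# from the log above; the function, backend list and VIP address are assumptions.
def render_listen_block(name, svc, backends, vip):
    # Build a minimal haproxy "listen" section for one service entry.
    lines = [f"listen {name}",
             f"    mode {svc['mode']}",
             f"    bind {vip}:{svc['port']}"]
    for host, addr in backends:
        lines.append(f"    server {host} {addr}:{svc['listen_port']} check")
    return "\n".join(lines)

aodh_api = {'enabled': 'yes', 'mode': 'http', 'external': False,
            'port': '8042', 'listen_port': '8042'}

print(render_listen_block(
    "aodh_api", aodh_api,
    backends=[("testbed-node-0", "192.168.16.10"),
              ("testbed-node-1", "192.168.16.11"),
              ("testbed-node-2", "192.168.16.12")],
    vip="192.168.16.254"))   # placeholder VIP; the real VIP is not shown in this log

The per-node healthcheck_curl targets 192.168.16.10-.12 seen in these items are the backend addresses such a section would balance across.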
2025-01-16 15:05:34.989042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.989056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.989069 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.989082 | orchestrator | 2025-01-16 15:05:34.989095 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-01-16 15:05:34.989108 | orchestrator | Thursday 16 January 2025 15:01:44 +0000 (0:00:00.783) 0:01:10.351 ****** 2025-01-16 15:05:34.989121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-01-16 15:05:34.989134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-01-16 15:05:34.989146 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.989159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-01-16 15:05:34.989177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-01-16 15:05:34.989189 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.989202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-01-16 15:05:34.989219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-01-16 15:05:34.989232 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.989245 | orchestrator | 2025-01-16 15:05:34.989258 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-01-16 15:05:34.989270 | orchestrator | Thursday 16 January 2025 15:01:45 +0000 (0:00:01.104) 0:01:11.456 ****** 2025-01-16 15:05:34.989283 | orchestrator | skipping: [testbed-node-0] 2025-01-16 
15:05:34.989295 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.989308 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.989320 | orchestrator | 2025-01-16 15:05:34.989332 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-01-16 15:05:34.989369 | orchestrator | Thursday 16 January 2025 15:01:45 +0000 (0:00:00.258) 0:01:11.715 ****** 2025-01-16 15:05:34.989383 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.989395 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.989407 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.989420 | orchestrator | 2025-01-16 15:05:34.989433 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-01-16 15:05:34.989445 | orchestrator | Thursday 16 January 2025 15:01:46 +0000 (0:00:00.716) 0:01:12.431 ****** 2025-01-16 15:05:34.989458 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:05:34.989471 | orchestrator | 2025-01-16 15:05:34.989484 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-01-16 15:05:34.989496 | orchestrator | Thursday 16 January 2025 15:01:46 +0000 (0:00:00.592) 0:01:13.024 ****** 2025-01-16 15:05:34.989510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-01-16 15:05:34.989530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.989550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.989570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-01-16 15:05:34.989585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.989598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.989643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 
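
Note: in the haproxy-config loop above, only service entries that carry a 'haproxy' map produce load-balancer configuration, which is why barbican-api is reported as changed on each controller while barbican-keystone-listener and barbican-worker are skipped everywhere. Below is a minimal Python sketch of that selection pattern; the dict is abridged from the barbican-api item visible in this log, and the helper function is an illustration of the pattern only, not kolla-ansible's actual Jinja2/role logic.

    # Illustration of the pattern seen in the "Copying over barbican haproxy config" task:
    # services without a 'haproxy' map are skipped; enabled listeners are rendered.
    barbican_services = {
        "barbican-api": {
            "enabled": True,
            "haproxy": {
                "barbican_api": {
                    "enabled": "yes", "mode": "http", "external": False,
                    "port": "9311", "listen_port": "9311", "tls_backend": "no",
                },
                "barbican_api_external": {
                    "enabled": "yes", "mode": "http", "external": True,
                    "external_fqdn": "api.testbed.osism.xyz",
                    "port": "9311", "listen_port": "9311", "tls_backend": "no",
                },
            },
        },
        "barbican-keystone-listener": {"enabled": True},  # no 'haproxy' map -> skipped
        "barbican-worker": {"enabled": True},             # no 'haproxy' map -> skipped
    }

    def haproxy_listeners(services):
        # Yield (service, listener name, listener) for entries that carry HAProxy config.
        for name, svc in services.items():
            if not svc.get("enabled"):
                continue
            for lname, listener in svc.get("haproxy", {}).items():
                if listener.get("enabled") in ("yes", True):
                    yield name, lname, listener

    for svc, lname, listener in haproxy_listeners(barbican_services):
        scope = "external" if listener["external"] else "internal"
        print(f"{svc}: {lname} -> {scope} frontend on port {listener['port']}")

Running the sketch prints one internal and one external listener for barbican-api on port 9311 and nothing for the two worker-style services, matching the changed/skipping results recorded above.
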
2025-01-16 15:05:34.989667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.989699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.989721 | orchestrator | 2025-01-16 15:05:34.989742 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-01-16 15:05:34.989763 | orchestrator | Thursday 16 January 2025 15:01:50 +0000 (0:00:03.598) 0:01:16.623 ****** 2025-01-16 15:05:34.989795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-01-16 15:05:34.989818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.989841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 
'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.989865 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.989887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-01-16 15:05:34.989911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.989932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.989946 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.989980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-01-16 15:05:34.989995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.990045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.990061 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.990074 | orchestrator | 2025-01-16 15:05:34.990087 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-01-16 15:05:34.990100 | orchestrator | Thursday 16 January 2025 15:01:50 +0000 (0:00:00.495) 0:01:17.119 ****** 2025-01-16 15:05:34.990113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-01-16 15:05:34.990126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-01-16 15:05:34.990140 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.990152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-01-16 15:05:34.990165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-01-16 15:05:34.990201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-01-16 15:05:34.990216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-01-16 15:05:34.990229 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.990242 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.990254 | orchestrator | 2025-01-16 15:05:34.990267 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-01-16 15:05:34.990280 | orchestrator | Thursday 16 January 2025 15:01:51 +0000 (0:00:00.878) 0:01:17.998 ****** 2025-01-16 15:05:34.990292 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.990305 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.990317 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.990330 | orchestrator | 2025-01-16 15:05:34.990342 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-01-16 15:05:34.990355 | orchestrator | Thursday 16 January 2025 15:01:52 +0000 (0:00:00.298) 0:01:18.296 ****** 2025-01-16 15:05:34.990367 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.990380 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.990392 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.990405 | orchestrator | 2025-01-16 15:05:34.990418 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-01-16 15:05:34.990430 | orchestrator | Thursday 16 January 2025 15:01:52 +0000 (0:00:00.876) 0:01:19.173 ****** 2025-01-16 15:05:34.990443 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.990455 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.990467 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.990480 | orchestrator | 2025-01-16 15:05:34.990492 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-01-16 15:05:34.990511 | orchestrator | Thursday 16 January 2025 15:01:53 +0000 (0:00:00.194) 0:01:19.368 ****** 2025-01-16 15:05:34.990523 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:05:34.990536 | orchestrator | 2025-01-16 15:05:34.990548 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-01-16 15:05:34.990560 | orchestrator | Thursday 16 January 2025 15:01:53 +0000 (0:00:00.661) 0:01:20.029 ****** 2025-01-16 15:05:34.990573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-01-16 15:05:34.990597 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-01-16 15:05:34.990686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-01-16 15:05:34.990703 | orchestrator | 2025-01-16 15:05:34.990716 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-01-16 15:05:34.990733 | orchestrator | Thursday 16 January 2025 15:01:55 +0000 (0:00:02.101) 0:01:22.131 ****** 2025-01-16 15:05:34.990747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-01-16 15:05:34.990766 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.990779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-01-16 15:05:34.990793 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.990815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-01-16 15:05:34.990829 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.990842 | orchestrator | 2025-01-16 15:05:34.990854 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-01-16 15:05:34.990874 | orchestrator | Thursday 16 January 2025 15:01:57 +0000 (0:00:01.536) 0:01:23.667 ****** 2025-01-16 15:05:34.990901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-01-16 15:05:34.990925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-01-16 15:05:34.990948 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.990998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-01-16 15:05:34.991019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-01-16 15:05:34.991037 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.991048 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-01-16 15:05:34.991059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-01-16 15:05:34.991070 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.991080 | orchestrator | 2025-01-16 15:05:34.991091 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-01-16 15:05:34.991115 | orchestrator | Thursday 16 January 2025 15:01:58 +0000 (0:00:01.319) 0:01:24.986 ****** 2025-01-16 15:05:34.991125 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.991136 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.991146 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.991156 | orchestrator | 2025-01-16 15:05:34.991166 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-01-16 15:05:34.991177 | orchestrator | Thursday 16 January 2025 15:01:59 +0000 (0:00:00.256) 0:01:25.243 ****** 2025-01-16 15:05:34.991187 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.991197 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.991207 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.991217 | orchestrator | 2025-01-16 15:05:34.991227 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-01-16 15:05:34.991238 | orchestrator | Thursday 16 January 2025 15:01:59 +0000 (0:00:00.710) 0:01:25.953 ****** 2025-01-16 15:05:34.991249 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:05:34.991259 | orchestrator | 2025-01-16 15:05:34.991269 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-01-16 15:05:34.991279 | orchestrator | Thursday 16 January 2025 15:02:00 +0000 (0:00:00.667) 0:01:26.621 ****** 2025-01-16 15:05:34.991290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 
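
Note: the cinder items above also show the two healthcheck shapes these service definitions use. API containers curl their own internal endpoint (for example healthcheck_curl http://192.168.16.11:8776 for cinder-api on testbed-node-1), while scheduler, volume, and backup containers only probe port 5672, which appears to be a check on their RabbitMQ (AMQP) connection rather than an HTTP endpoint. The small Python sketch below rebuilds those dicts from the values visible in the log; the helper name and signature are illustrative, not kolla-ansible code.

    # Rebuild the two healthcheck shapes seen in the cinder service items above.
    def kolla_healthcheck(service, api_address=None, api_port=None,
                          interval="30", retries="3", start_period="5", timeout="30"):
        # API services probe their local HTTP endpoint; others probe the AMQP port.
        if api_address and api_port:
            test = ["CMD-SHELL", f"healthcheck_curl http://{api_address}:{api_port}"]
        else:
            test = ["CMD-SHELL", f"healthcheck_port {service} 5672"]
        return {"interval": interval, "retries": retries,
                "start_period": start_period, "test": test, "timeout": timeout}

    # cinder-api on testbed-node-1 checks its internal API address and port:
    print(kolla_healthcheck("cinder-api", api_address="192.168.16.11", api_port="8776"))
    # cinder-scheduler exposes no HTTP endpoint, so only port 5672 is probed:
    print(kolla_healthcheck("cinder-scheduler"))
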
2025-01-16 15:05:34.991302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.991332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.991354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-01-16 15:05:34.991366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.991384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.991403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.991447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-01-16 15:05:34.991468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.991500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.991519 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.991531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.991541 | orchestrator | 2025-01-16 15:05:34.991552 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-01-16 15:05:34.991572 | orchestrator | Thursday 16 January 2025 15:02:04 +0000 (0:00:03.741) 0:01:30.363 ****** 2025-01-16 15:05:34.991597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-01-16 15:05:34.991628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.991648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 
'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.991660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.991671 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.991682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-01-16 15:05:34.991710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.991721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.991740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.991752 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.991763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-01-16 15:05:34.991774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.991790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.991813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.991824 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.991835 | orchestrator | 2025-01-16 15:05:34.991845 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-01-16 15:05:34.991856 | orchestrator | Thursday 16 January 2025 15:02:04 +0000 (0:00:00.691) 0:01:31.054 ****** 2025-01-16 15:05:34.991866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-01-16 15:05:34.991876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-01-16 15:05:34.991887 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.991897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-01-16 15:05:34.991907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-01-16 15:05:34.991917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-01-16 15:05:34.991928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-01-16 15:05:34.991938 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.991949 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.991959 | orchestrator | 2025-01-16 15:05:34.991970 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-01-16 15:05:34.991980 | orchestrator | Thursday 16 January 2025 15:02:05 +0000 (0:00:00.819) 0:01:31.873 ****** 2025-01-16 15:05:34.991996 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.992015 | orchestrator | skipping: [testbed-node-1] 2025-01-16 
15:05:34.992035 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.992062 | orchestrator | 2025-01-16 15:05:34.992081 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-01-16 15:05:34.992099 | orchestrator | Thursday 16 January 2025 15:02:05 +0000 (0:00:00.302) 0:01:32.176 ****** 2025-01-16 15:05:34.992119 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.992139 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.992156 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.992166 | orchestrator | 2025-01-16 15:05:34.992177 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-01-16 15:05:34.992187 | orchestrator | Thursday 16 January 2025 15:02:06 +0000 (0:00:00.854) 0:01:33.030 ****** 2025-01-16 15:05:34.992197 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.992207 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.992218 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.992228 | orchestrator | 2025-01-16 15:05:34.992238 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-01-16 15:05:34.992249 | orchestrator | Thursday 16 January 2025 15:02:07 +0000 (0:00:00.197) 0:01:33.227 ****** 2025-01-16 15:05:34.992259 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.992269 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.992279 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.992289 | orchestrator | 2025-01-16 15:05:34.992300 | orchestrator | TASK [include_role : designate] ************************************************ 2025-01-16 15:05:34.992310 | orchestrator | Thursday 16 January 2025 15:02:07 +0000 (0:00:00.287) 0:01:33.515 ****** 2025-01-16 15:05:34.992320 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:05:34.992330 | orchestrator | 2025-01-16 15:05:34.992341 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-01-16 15:05:34.992351 | orchestrator | Thursday 16 January 2025 15:02:07 +0000 (0:00:00.630) 0:01:34.145 ****** 2025-01-16 15:05:34.992396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-01-16 15:05:34.992409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-01-16 15:05:34.992420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.992438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.992456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.992468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.992500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.992522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 
'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-01-16 15:05:34.992551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-01-16 15:05:34.992578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.992596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-01-16 15:05:34.992634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.992674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-01-16 15:05:34.992693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.992711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.992750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.992771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.992791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.992809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.992846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.992867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.992896 | orchestrator | 2025-01-16 15:05:34.992914 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-01-16 15:05:34.992933 | orchestrator | Thursday 16 January 2025 15:02:11 +0000 (0:00:03.781) 0:01:37.927 ****** 2025-01-16 15:05:34.992951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-01-16 15:05:34.992983 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-01-16 15:05:34.993002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.993021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.993059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.993079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.993110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.993138 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.993156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-01-16 15:05:34.993175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-01-16 15:05:34.993193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.993230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.993251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.993288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.993309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.993328 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.993347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-01-16 15:05:34.993366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-01-16 15:05:34.993384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.993422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.993461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.993480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.993498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.993516 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.993533 | orchestrator | 2025-01-16 15:05:34.993550 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-01-16 15:05:34.993568 | orchestrator | Thursday 16 January 2025 15:02:12 +0000 (0:00:00.651) 0:01:38.579 ****** 2025-01-16 15:05:34.993586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-01-16 15:05:34.993659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}})  2025-01-16 15:05:34.993683 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.993700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-01-16 15:05:34.993718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-01-16 15:05:34.993735 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.993753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-01-16 15:05:34.993769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-01-16 15:05:34.993786 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.993800 | orchestrator | 2025-01-16 15:05:34.993814 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-01-16 15:05:34.993830 | orchestrator | Thursday 16 January 2025 15:02:13 +0000 (0:00:00.851) 0:01:39.430 ****** 2025-01-16 15:05:34.993854 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.993880 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.993890 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.993899 | orchestrator | 2025-01-16 15:05:34.993908 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-01-16 15:05:34.993917 | orchestrator | Thursday 16 January 2025 15:02:13 +0000 (0:00:00.204) 0:01:39.634 ****** 2025-01-16 15:05:34.993925 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.993934 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.993943 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.993952 | orchestrator | 2025-01-16 15:05:34.993961 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-01-16 15:05:34.993970 | orchestrator | Thursday 16 January 2025 15:02:14 +0000 (0:00:00.831) 0:01:40.465 ****** 2025-01-16 15:05:34.993978 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.993987 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.993995 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.994004 | orchestrator | 2025-01-16 15:05:34.994013 | orchestrator | TASK [include_role : glance] *************************************************** 2025-01-16 15:05:34.994043 | orchestrator | Thursday 16 January 2025 15:02:14 +0000 (0:00:00.288) 0:01:40.753 ****** 2025-01-16 15:05:34.994052 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:05:34.994061 | orchestrator | 2025-01-16 15:05:34.994069 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-01-16 15:05:34.994078 | orchestrator | Thursday 16 January 2025 15:02:15 +0000 (0:00:00.676) 0:01:41.430 ****** 2025-01-16 15:05:34.994099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 
'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-01-16 15:05:34.994123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-01-16 15:05:34.994144 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-01-16 15:05:34.994164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-01-16 15:05:34.994179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-01-16 15:05:34.994189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-01-16 15:05:34.994209 | orchestrator | 2025-01-16 15:05:34.994218 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-01-16 15:05:34.994227 | orchestrator | Thursday 16 January 2025 15:02:18 +0000 (0:00:03.225) 0:01:44.656 ****** 2025-01-16 15:05:34.994248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-01-16 15:05:34.994258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-01-16 15:05:34.994282 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.994303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-01-16 15:05:34.994314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-01-16 15:05:34.994329 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.994349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-01-16 15:05:34.994368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 
2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-01-16 15:05:34.994378 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.994387 | orchestrator | 2025-01-16 15:05:34.994396 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-01-16 15:05:34.994404 | orchestrator | Thursday 16 January 2025 15:02:22 +0000 (0:00:04.467) 0:01:49.123 ****** 2025-01-16 15:05:34.994413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-01-16 15:05:34.994428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-01-16 15:05:34.994437 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.994456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-01-16 15:05:34.994466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-01-16 15:05:34.994476 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.994485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-01-16 15:05:34.994494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-01-16 15:05:34.994503 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.994512 | orchestrator | 2025-01-16 15:05:34.994521 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-01-16 15:05:34.994530 | orchestrator | Thursday 16 January 2025 15:02:29 +0000 (0:00:06.503) 0:01:55.627 ****** 2025-01-16 15:05:34.994539 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.994547 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.994556 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.994565 | orchestrator | 2025-01-16 15:05:34.994574 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-01-16 15:05:34.994583 | orchestrator | Thursday 16 January 2025 15:02:29 +0000 (0:00:00.263) 0:01:55.891 ****** 2025-01-16 15:05:34.994592 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.994620 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.994634 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.994644 | orchestrator | 2025-01-16 15:05:34.994653 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-01-16 15:05:34.994662 | orchestrator | Thursday 16 January 2025 15:02:30 +0000 (0:00:01.240) 0:01:57.131 ****** 2025-01-16 15:05:34.994671 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.994680 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.994689 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.994697 | orchestrator | 2025-01-16 15:05:34.994706 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-01-16 15:05:34.994715 | orchestrator | Thursday 16 January 2025 15:02:31 +0000 (0:00:00.503) 0:01:57.635 ****** 2025-01-16 15:05:34.994723 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:05:34.994732 | orchestrator | 2025-01-16 15:05:34.994741 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-01-16 15:05:34.994749 | orchestrator | Thursday 16 January 2025 15:02:32 +0000 (0:00:01.153) 0:01:58.789 ****** 2025-01-16 15:05:34.994758 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-01-16 15:05:34.994779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-01-16 15:05:34.994789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-01-16 15:05:34.994797 | orchestrator | 2025-01-16 15:05:34.994806 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-01-16 15:05:34.994815 | orchestrator | Thursday 16 January 2025 15:02:37 +0000 (0:00:04.638) 0:02:03.427 ****** 2025-01-16 15:05:34.994824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-01-16 15:05:34.994838 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.994853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-01-16 15:05:34.994863 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.994872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-01-16 15:05:34.994881 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.994889 | orchestrator | 2025-01-16 15:05:34.994898 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-01-16 15:05:34.994906 | orchestrator | Thursday 16 January 2025 15:02:37 +0000 (0:00:00.541) 0:02:03.969 ****** 2025-01-16 15:05:34.994915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-01-16 15:05:34.994927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-01-16 15:05:34.994937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-01-16 15:05:34.994955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-01-16 15:05:34.994965 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.994974 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.994983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-01-16 15:05:34.994992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-01-16 15:05:34.995001 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.995009 | orchestrator | 2025-01-16 15:05:34.995018 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-01-16 15:05:34.995027 | orchestrator | Thursday 16 January 2025 15:02:38 +0000 (0:00:00.842) 0:02:04.811 ****** 2025-01-16 15:05:34.995036 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.995045 | orchestrator | skipping: [testbed-node-1] 2025-01-16 
15:05:34.995054 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.995066 | orchestrator | 2025-01-16 15:05:34.995075 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-01-16 15:05:34.995087 | orchestrator | Thursday 16 January 2025 15:02:38 +0000 (0:00:00.192) 0:02:05.003 ****** 2025-01-16 15:05:34.995096 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.995104 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.995113 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.995121 | orchestrator | 2025-01-16 15:05:34.995130 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-01-16 15:05:34.995139 | orchestrator | Thursday 16 January 2025 15:02:39 +0000 (0:00:00.941) 0:02:05.944 ****** 2025-01-16 15:05:34.995147 | orchestrator | included: heat for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:05:34.995156 | orchestrator | 2025-01-16 15:05:34.995165 | orchestrator | TASK [haproxy-config : Copying over heat haproxy config] *********************** 2025-01-16 15:05:34.995173 | orchestrator | Thursday 16 January 2025 15:02:40 +0000 (0:00:01.101) 0:02:07.045 ****** 2025-01-16 15:05:34.995182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-api:2024.1', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-01-16 15:05:34.995192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-api:2024.1', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-01-16 15:05:34.995210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-api:2024.1', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-01-16 15:05:34.995220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-api-cfn:2024.1', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-01-16 15:05:34.995234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-engine:2024.1', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.995249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-api-cfn:2024.1', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-01-16 15:05:34.995259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-engine:2024.1', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.995268 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-api-cfn:2024.1', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-01-16 15:05:34.995287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-engine:2024.1', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.995303 | orchestrator | 2025-01-16 15:05:34.995312 | orchestrator | TASK [haproxy-config : Add configuration for heat when using single external frontend] *** 2025-01-16 15:05:34.995321 | orchestrator | Thursday 16 January 2025 15:02:47 +0000 (0:00:06.384) 0:02:13.430 ****** 2025-01-16 15:05:34.995330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-api:2024.1', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-01-16 15:05:34.995345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-api-cfn:2024.1', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 
'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-01-16 15:05:34.995355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-engine:2024.1', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.995364 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.995373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-api:2024.1', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-01-16 15:05:34.995392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-api-cfn:2024.1', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-01-16 15:05:34.995406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-engine:2024.1', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.995415 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.995430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-api:2024.1', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-01-16 15:05:34.995440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-api-cfn:2024.1', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-01-16 15:05:34.995449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-engine:2024.1', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.995458 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.995467 | orchestrator | 2025-01-16 15:05:34.995476 | orchestrator | TASK [haproxy-config : Configuring firewall for heat] ************************** 2025-01-16 15:05:34.995485 | orchestrator | Thursday 16 January 2025 15:02:47 +0000 (0:00:00.715) 0:02:14.145 ****** 2025-01-16 15:05:34.995494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-01-16 15:05:34.995519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-01-16 15:05:34.995529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-01-16 15:05:34.995538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-01-16 15:05:34.995546 | orchestrator | skipping: 
[testbed-node-0] 2025-01-16 15:05:34.995555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-01-16 15:05:34.995564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-01-16 15:05:34.995576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-01-16 15:05:34.995585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-01-16 15:05:34.995594 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.995612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-01-16 15:05:34.995622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-01-16 15:05:34.995631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-01-16 15:05:34.995639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-01-16 15:05:34.995648 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.995657 | orchestrator | 2025-01-16 15:05:34.995666 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL users config] *************** 2025-01-16 15:05:34.995674 | orchestrator | Thursday 16 January 2025 15:02:48 +0000 (0:00:00.961) 0:02:15.107 ****** 2025-01-16 15:05:34.995683 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.995691 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.995700 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.995709 | orchestrator | 2025-01-16 15:05:34.995717 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL rules config] *************** 2025-01-16 15:05:34.995726 | orchestrator | Thursday 16 January 2025 15:02:49 +0000 (0:00:00.276) 0:02:15.383 ****** 2025-01-16 15:05:34.995734 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.995743 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.995751 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.995760 | orchestrator | 2025-01-16 15:05:34.995768 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-01-16 15:05:34.995782 | orchestrator | Thursday 16 January 2025 15:02:50 +0000 (0:00:00.821) 0:02:16.204 ****** 2025-01-16 15:05:34.995790 | orchestrator | included: horizon for testbed-node-0, 
testbed-node-1, testbed-node-2 2025-01-16 15:05:34.995799 | orchestrator | 2025-01-16 15:05:34.995808 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-01-16 15:05:34.995817 | orchestrator | Thursday 16 January 2025 15:02:50 +0000 (0:00:00.718) 0:02:16.923 ****** 2025-01-16 15:05:34.995844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-01-16 15:05:34.995861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': 
{'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-01-16 15:05:34.995886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-01-16 15:05:34.995896 | orchestrator | 2025-01-16 15:05:34.995905 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external 
frontend] *** 2025-01-16 15:05:34.995914 | orchestrator | Thursday 16 January 2025 15:02:53 +0000 (0:00:02.655) 0:02:19.579 ****** 2025-01-16 15:05:34.995938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-01-16 15:05:34.995953 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.995962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-01-16 15:05:34.995977 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.995997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-01-16 15:05:34.996013 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.996023 | orchestrator | 2025-01-16 15:05:34.996032 | orchestrator | TASK [haproxy-config : Configuring 
firewall for horizon] *********************** 2025-01-16 15:05:34.996041 | orchestrator | Thursday 16 January 2025 15:02:53 +0000 (0:00:00.594) 0:02:20.174 ****** 2025-01-16 15:05:34.996050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-01-16 15:05:34.996060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-01-16 15:05:34.996069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-01-16 15:05:34.996079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-01-16 15:05:34.996089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-01-16 15:05:34.996098 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.996110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-01-16 15:05:34.996123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-01-16 15:05:34.996132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-01-16 15:05:34.996141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-01-16 15:05:34.996150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 
'custom_member_list': []}})  2025-01-16 15:05:34.996159 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.996168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-01-16 15:05:34.996186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-01-16 15:05:34.996196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-01-16 15:05:34.996205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-01-16 15:05:34.996214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-01-16 15:05:34.996226 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.996236 | orchestrator | 2025-01-16 15:05:34.996245 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-01-16 15:05:34.996253 | orchestrator | Thursday 16 January 2025 15:02:54 +0000 (0:00:00.811) 0:02:20.985 ****** 2025-01-16 15:05:34.996262 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.996271 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.996280 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.996288 | orchestrator | 2025-01-16 15:05:34.996297 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-01-16 15:05:34.996306 | orchestrator | Thursday 16 January 2025 15:02:55 +0000 (0:00:00.278) 0:02:21.264 ****** 2025-01-16 15:05:34.996314 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.996323 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.996332 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.996345 | orchestrator | 2025-01-16 15:05:34.996354 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-01-16 15:05:34.996363 | orchestrator | Thursday 16 January 2025 15:02:55 +0000 (0:00:00.813) 0:02:22.077 ****** 2025-01-16 15:05:34.996372 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.996381 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.996390 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.996398 | orchestrator | 2025-01-16 15:05:34.996407 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-01-16 15:05:34.996416 | orchestrator | 
Thursday 16 January 2025 15:02:56 +0000 (0:00:00.275) 0:02:22.352 ****** 2025-01-16 15:05:34.996424 | orchestrator | included: ironic for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:05:34.996433 | orchestrator | 2025-01-16 15:05:34.996441 | orchestrator | TASK [haproxy-config : Copying over ironic haproxy config] ********************* 2025-01-16 15:05:34.996450 | orchestrator | Thursday 16 January 2025 15:02:56 +0000 (0:00:00.720) 0:02:23.072 ****** 2025-01-16 15:05:34.996459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-01-16 15:05:34.996470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-01-16 15:05:34.996492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.996503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', 
'/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.996516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-01-16 15:05:34.996526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.996536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-01-16 15:05:34.996555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-01-16 
15:05:34.996565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-01-16 15:05:34.996578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-01-16 15:05:34.996588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-01-16 15:05:34.996597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-01-16 15:05:34.996647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-01-16 15:05:34.996669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-01-16 15:05:34.996679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-01-16 15:05:34.996688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-01-16 15:05:34.996702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-01-16 15:05:34.996718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-01-16 15:05:34.996726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 
'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-01-16 15:05:34.996735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-01-16 15:05:34.996753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-01-16 15:05:34.996762 | orchestrator | 2025-01-16 15:05:34.996770 | orchestrator | TASK [haproxy-config : Add configuration for ironic when using single external frontend] *** 2025-01-16 15:05:34.996779 | orchestrator | Thursday 16 January 2025 15:03:01 +0000 (0:00:05.095) 0:02:28.168 ****** 2025-01-16 15:05:34.996787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-01-16 15:05:34.996799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.996813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-01-16 15:05:34.996822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-01-16 15:05:34.996840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-01-16 15:05:34.996849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-01-16 15:05:34.996862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-01-16 15:05:34.996870 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.996879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-01-16 15:05:34.996893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.996902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-01-16 15:05:34.996920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-01-16 15:05:34.996933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-01-16 15:05:34.996942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-01-16 15:05:34.996957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.996966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-01-16 15:05:34.996975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-01-16 15:05:34.996995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': 
['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-01-16 15:05:34.997008 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.997017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-01-16 15:05:34.997025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-01-16 15:05:34.997039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-01-16 15:05:34.997048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-01-16 15:05:34.997057 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.997065 | orchestrator | 2025-01-16 15:05:34.997073 | orchestrator | TASK [haproxy-config : Configuring firewall for ironic] ************************ 2025-01-16 15:05:34.997081 | orchestrator | Thursday 16 January 2025 15:03:03 +0000 (0:00:01.201) 0:02:29.370 ****** 2025-01-16 15:05:34.997090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}})  2025-01-16 15:05:34.997098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}})  2025-01-16 15:05:34.997106 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'ironic_inspector', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}})  2025-01-16 15:05:34.997114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic_inspector_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}})  2025-01-16 15:05:34.997123 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.997131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}})  2025-01-16 15:05:34.997143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}})  2025-01-16 15:05:34.997160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}})  2025-01-16 15:05:34.997169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}})  2025-01-16 15:05:34.997179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic_inspector', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}})  2025-01-16 15:05:34.997187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic_inspector', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}})  2025-01-16 15:05:34.997196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic_inspector_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}})  2025-01-16 15:05:34.997204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic_inspector_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}})  2025-01-16 15:05:34.997212 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.997220 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.997228 | orchestrator | 2025-01-16 15:05:34.997236 | orchestrator | TASK [proxysql-config : Copying over ironic ProxySQL users config] ************* 2025-01-16 15:05:34.997244 | orchestrator | Thursday 16 January 2025 15:03:04 +0000 (0:00:01.086) 0:02:30.456 ****** 2025-01-16 15:05:34.997252 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.997263 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.997272 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.997280 | orchestrator | 2025-01-16 15:05:34.997288 | orchestrator | TASK [proxysql-config : Copying over ironic ProxySQL rules config] ************* 2025-01-16 15:05:34.997296 | orchestrator | Thursday 16 January 2025 15:03:04 +0000 (0:00:00.283) 0:02:30.740 ****** 2025-01-16 15:05:34.997304 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.997313 | orchestrator | skipping: [testbed-node-1] 2025-01-16 
15:05:34.997321 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.997329 | orchestrator | 2025-01-16 15:05:34.997337 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-01-16 15:05:34.997345 | orchestrator | Thursday 16 January 2025 15:03:05 +0000 (0:00:01.053) 0:02:31.793 ****** 2025-01-16 15:05:34.997353 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:05:34.997361 | orchestrator | 2025-01-16 15:05:34.997369 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-01-16 15:05:34.997377 | orchestrator | Thursday 16 January 2025 15:03:06 +0000 (0:00:00.924) 0:02:32.717 ****** 2025-01-16 15:05:34.997386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-01-16 15:05:34.997398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-01-16 15:05:34.997416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-01-16 15:05:34.997425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-01-16 15:05:34.997433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-01-16 15:05:34.997445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-01-16 15:05:34.997460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-01-16 15:05:34.997472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-01-16 15:05:34.997481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-01-16 15:05:34.997489 | orchestrator | 2025-01-16 15:05:34.997498 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-01-16 15:05:34.997506 | orchestrator | Thursday 16 January 2025 15:03:10 +0000 (0:00:03.802) 0:02:36.520 ****** 2025-01-16 15:05:34.997514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-01-16 15:05:34.997529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-01-16 15:05:34.997541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-01-16 15:05:34.997549 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.997568 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-01-16 15:05:34.997577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-01-16 15:05:34.997586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-01-16 15:05:34.997595 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.997619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-01-16 15:05:34.997634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 
'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-01-16 15:05:34.997643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-01-16 15:05:34.997651 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.997659 | orchestrator | 2025-01-16 15:05:34.997668 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-01-16 15:05:34.997676 | orchestrator | Thursday 16 January 2025 15:03:11 +0000 (0:00:00.673) 0:02:37.193 ****** 2025-01-16 15:05:34.997697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-01-16 15:05:34.997706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-01-16 15:05:34.997715 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.997723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-01-16 15:05:34.997732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-01-16 15:05:34.997740 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.997748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-01-16 15:05:34.997757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-01-16 15:05:34.997765 | orchestrator | 
skipping: [testbed-node-2] 2025-01-16 15:05:34.997773 | orchestrator | 2025-01-16 15:05:34.997781 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-01-16 15:05:34.997795 | orchestrator | Thursday 16 January 2025 15:03:12 +0000 (0:00:01.006) 0:02:38.200 ****** 2025-01-16 15:05:34.997804 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.997812 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.997820 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.997828 | orchestrator | 2025-01-16 15:05:34.997836 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-01-16 15:05:34.997844 | orchestrator | Thursday 16 January 2025 15:03:12 +0000 (0:00:00.200) 0:02:38.401 ****** 2025-01-16 15:05:34.997852 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.997860 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.997868 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.997876 | orchestrator | 2025-01-16 15:05:34.997885 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-01-16 15:05:34.997893 | orchestrator | Thursday 16 January 2025 15:03:13 +0000 (0:00:00.876) 0:02:39.277 ****** 2025-01-16 15:05:34.997901 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.997909 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.997917 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.997925 | orchestrator | 2025-01-16 15:05:34.997933 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-01-16 15:05:34.997941 | orchestrator | Thursday 16 January 2025 15:03:13 +0000 (0:00:00.301) 0:02:39.579 ****** 2025-01-16 15:05:34.997949 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:05:34.997957 | orchestrator | 2025-01-16 15:05:34.997965 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-01-16 15:05:34.997973 | orchestrator | Thursday 16 January 2025 15:03:14 +0000 (0:00:00.935) 0:02:40.514 ****** 2025-01-16 15:05:34.997981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-01-16 15:05:34.997999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.998008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-01-16 15:05:34.998038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.998056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-01-16 15:05:34.998066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.998074 | orchestrator | 2025-01-16 15:05:34.998082 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-01-16 15:05:34.998091 | orchestrator | Thursday 16 January 2025 15:03:18 +0000 (0:00:03.696) 0:02:44.211 ****** 2025-01-16 15:05:34.998110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-01-16 15:05:34.998125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.998138 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.998146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-01-16 15:05:34.998155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.998163 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.998171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-01-16 15:05:34.998195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.998208 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.998217 | orchestrator | 2025-01-16 15:05:34.998225 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-01-16 15:05:34.998233 | orchestrator | Thursday 16 January 2025 15:03:18 +0000 (0:00:00.768) 0:02:44.979 ****** 2025-01-16 15:05:34.998241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-01-16 15:05:34.998250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-01-16 15:05:34.998258 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.998266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-01-16 15:05:34.998274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-01-16 15:05:34.998282 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.998290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-01-16 15:05:34.998301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-01-16 15:05:34.998315 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.998323 | orchestrator | 2025-01-16 15:05:34.998331 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-01-16 15:05:34.998342 | orchestrator | Thursday 16 January 2025 15:03:19 +0000 (0:00:00.965) 0:02:45.945 ****** 2025-01-16 15:05:34.998350 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.998359 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.998366 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.998374 | orchestrator | 2025-01-16 15:05:34.998382 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-01-16 15:05:34.998390 | orchestrator | Thursday 16 January 2025 15:03:19 +0000 (0:00:00.234) 0:02:46.179 ****** 2025-01-16 15:05:34.998411 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.998419 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.998427 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.998435 | orchestrator | 2025-01-16 15:05:34.998443 | orchestrator | TASK [include_role : manila] *************************************************** 2025-01-16 15:05:34.998452 | orchestrator | Thursday 16 January 2025 15:03:20 +0000 (0:00:00.947) 0:02:47.126 ****** 2025-01-16 15:05:34.998460 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:05:34.998468 | orchestrator | 2025-01-16 15:05:34.998475 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-01-16 15:05:34.998483 | orchestrator | Thursday 16 January 2025 15:03:21 +0000 (0:00:00.971) 0:02:48.098 ****** 2025-01-16 15:05:34.998492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-01-16 15:05:34.998516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.998526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.998535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.998550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-01-16 15:05:34.998560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.998569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 
'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.998591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.998601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-01-16 15:05:34.998645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.998655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.998664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.998673 | orchestrator | 2025-01-16 15:05:34.998682 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-01-16 15:05:34.998691 | orchestrator | Thursday 16 January 2025 15:03:25 +0000 (0:00:03.867) 0:02:51.965 ****** 2025-01-16 15:05:34.998712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-01-16 15:05:34.998727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.998742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.998752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.998761 | orchestrator | 
skipping: [testbed-node-0] 2025-01-16 15:05:34.998770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-01-16 15:05:34.998780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.998792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.998811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.998821 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.998834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': 
{'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-01-16 15:05:34.998842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.998851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.998858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-01-16 15:05:34.998870 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.998878 | orchestrator | 2025-01-16 15:05:34.998885 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-01-16 15:05:34.998893 | orchestrator | Thursday 16 January 2025 15:03:26 +0000 (0:00:00.684) 0:02:52.650 ****** 2025-01-16 15:05:34.998901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-01-16 15:05:34.998908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-01-16 15:05:34.998916 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.998935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-01-16 15:05:34.998943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8786', 'listen_port': '8786'}})  2025-01-16 15:05:34.998951 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.998959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-01-16 15:05:34.998967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-01-16 15:05:34.998974 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.998982 | orchestrator | 2025-01-16 15:05:34.998990 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-01-16 15:05:34.998997 | orchestrator | Thursday 16 January 2025 15:03:27 +0000 (0:00:00.884) 0:02:53.535 ****** 2025-01-16 15:05:34.999005 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.999013 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.999020 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.999028 | orchestrator | 2025-01-16 15:05:34.999035 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-01-16 15:05:34.999043 | orchestrator | Thursday 16 January 2025 15:03:27 +0000 (0:00:00.316) 0:02:53.851 ****** 2025-01-16 15:05:34.999050 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.999058 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.999066 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.999125 | orchestrator | 2025-01-16 15:05:34.999134 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-01-16 15:05:34.999142 | orchestrator | Thursday 16 January 2025 15:03:28 +0000 (0:00:00.902) 0:02:54.753 ****** 2025-01-16 15:05:34.999149 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:05:34.999157 | orchestrator | 2025-01-16 15:05:34.999165 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-01-16 15:05:34.999172 | orchestrator | Thursday 16 January 2025 15:03:29 +0000 (0:00:01.046) 0:02:55.800 ****** 2025-01-16 15:05:34.999180 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-01-16 15:05:34.999188 | orchestrator | 2025-01-16 15:05:34.999195 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-01-16 15:05:34.999203 | orchestrator | Thursday 16 January 2025 15:03:32 +0000 (0:00:02.500) 0:02:58.300 ****** 2025-01-16 15:05:34.999214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-01-16 15:05:34.999232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-01-16 15:05:34.999241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-01-16 15:05:34.999254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-01-16 15:05:34.999265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-01-16 15:05:34.999274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-01-16 15:05:34.999282 | orchestrator | 2025-01-16 15:05:34.999289 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-01-16 15:05:34.999297 | orchestrator | Thursday 16 January 2025 15:03:35 +0000 (0:00:03.478) 0:03:01.779 ****** 2025-01-16 15:05:34.999305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-01-16 15:05:34.999316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-01-16 15:05:34.999324 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.999342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 
backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-01-16 15:05:34.999352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-01-16 15:05:34.999365 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.999373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-01-16 15:05:34.999390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-01-16 15:05:34.999398 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.999406 | orchestrator | 2025-01-16 15:05:34.999414 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-01-16 15:05:34.999422 | orchestrator | Thursday 16 January 2025 15:03:37 +0000 (0:00:02.175) 0:03:03.954 ****** 2025-01-16 15:05:34.999429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-01-16 15:05:34.999437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-01-16 15:05:34.999448 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.999456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-01-16 15:05:34.999464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-01-16 15:05:34.999472 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.999480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server 
testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-01-16 15:05:34.999488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-01-16 15:05:34.999496 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.999504 | orchestrator | 2025-01-16 15:05:34.999511 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-01-16 15:05:34.999527 | orchestrator | Thursday 16 January 2025 15:03:40 +0000 (0:00:02.711) 0:03:06.665 ****** 2025-01-16 15:05:34.999536 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.999547 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.999554 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.999562 | orchestrator | 2025-01-16 15:05:34.999570 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-01-16 15:05:34.999578 | orchestrator | Thursday 16 January 2025 15:03:40 +0000 (0:00:00.200) 0:03:06.866 ****** 2025-01-16 15:05:34.999585 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.999593 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.999600 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.999617 | orchestrator | 2025-01-16 15:05:34.999625 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-01-16 15:05:34.999632 | orchestrator | Thursday 16 January 2025 15:03:41 +0000 (0:00:00.884) 0:03:07.750 ****** 2025-01-16 15:05:34.999643 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.999651 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.999658 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.999666 | orchestrator | 2025-01-16 15:05:34.999673 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-01-16 15:05:34.999684 | orchestrator | Thursday 16 January 2025 15:03:41 +0000 (0:00:00.305) 0:03:08.056 ****** 2025-01-16 15:05:34.999691 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:05:34.999699 | orchestrator | 2025-01-16 15:05:34.999707 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-01-16 15:05:34.999714 | orchestrator | Thursday 16 January 2025 15:03:42 +0000 (0:00:00.993) 0:03:09.049 ****** 2025-01-16 15:05:34.999721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'nexus.testbed.osism.xyz:8193/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-01-16 15:05:34.999730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'nexus.testbed.osism.xyz:8193/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-01-16 15:05:34.999739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'nexus.testbed.osism.xyz:8193/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-01-16 15:05:34.999747 | orchestrator | 2025-01-16 15:05:34.999754 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-01-16 15:05:34.999761 | orchestrator | Thursday 16 January 2025 15:03:43 +0000 (0:00:00.998) 0:03:10.048 ****** 2025-01-16 15:05:34.999778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'nexus.testbed.osism.xyz:8193/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-01-16 15:05:34.999791 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.999799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'nexus.testbed.osism.xyz:8193/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-01-16 15:05:34.999807 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.999815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'nexus.testbed.osism.xyz:8193/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-01-16 15:05:34.999823 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.999831 | orchestrator | 2025-01-16 15:05:34.999838 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-01-16 15:05:34.999846 | orchestrator | Thursday 16 January 2025 15:03:44 +0000 (0:00:00.573) 0:03:10.622 ****** 2025-01-16 15:05:34.999854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-01-16 15:05:34.999861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-01-16 15:05:34.999869 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.999877 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.999885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-01-16 15:05:34.999892 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.999900 | orchestrator | 2025-01-16 15:05:34.999907 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-01-16 15:05:34.999915 | orchestrator | Thursday 16 January 2025 15:03:45 +0000 (0:00:00.842) 0:03:11.464 ****** 2025-01-16 15:05:34.999922 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.999930 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.999937 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.999945 | orchestrator | 2025-01-16 15:05:34.999952 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-01-16 15:05:34.999960 | orchestrator | Thursday 16 January 2025 15:03:45 +0000 (0:00:00.260) 
0:03:11.725 ****** 2025-01-16 15:05:34.999967 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:34.999979 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:34.999986 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:34.999994 | orchestrator | 2025-01-16 15:05:35.000002 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-01-16 15:05:35.000009 | orchestrator | Thursday 16 January 2025 15:03:46 +0000 (0:00:00.967) 0:03:12.693 ****** 2025-01-16 15:05:35.000017 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.000025 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.000033 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.000040 | orchestrator | 2025-01-16 15:05:35.000048 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-01-16 15:05:35.000056 | orchestrator | Thursday 16 January 2025 15:03:46 +0000 (0:00:00.335) 0:03:13.028 ****** 2025-01-16 15:05:35.000072 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:05:35.000080 | orchestrator | 2025-01-16 15:05:35.000088 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-01-16 15:05:35.000096 | orchestrator | Thursday 16 January 2025 15:03:47 +0000 (0:00:01.133) 0:03:14.162 ****** 2025-01-16 15:05:35.000104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-01-16 15:05:35.000112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.000120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.000128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.000150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:05:35.000158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.000166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:05:35.000175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:05:35.000183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.000191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:05:35.000203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-01-16 15:05:35.000221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.000230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.000238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:05:35.000245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.000254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:05:35.000265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.000282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.000291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:05:35.000299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:05:35.000307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.000319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:05:35.000327 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:05:35.000343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.000352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:05:35.000360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.000368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:05:35.000381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-01-16 15:05:35.000389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.000406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.000415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:05:35.000423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:05:35.000431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.000442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.000450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.000467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:05:35.000475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:05:35.000484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.000495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:05:35.000503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:05:35.000511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.000527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:05:35.000536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.000544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:05:35.000552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.000564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:05:35.000571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:05:35.000580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.000596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:05:35.000623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:05:35.000632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.000644 | orchestrator | 2025-01-16 15:05:35.000652 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-01-16 15:05:35.000660 | orchestrator | Thursday 16 January 2025 15:03:51 +0000 (0:00:03.437) 0:03:17.599 ****** 2025-01-16 15:05:35.000668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:05:35.000676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.000693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.000702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.000713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:05:35.000721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.000729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:05:35.000737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:05:35.000759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.000769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:05:35.000777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.000788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:05:35.000796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:05:35.000809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.000826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:05:35.000834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:05:35.000845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.000854 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.000867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:05:35.000875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.000892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.000901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.000909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:05:35.000920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.000933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:05:35.000942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:05:35.000950 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:05:35.000966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.000978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.000986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.001001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 
'timeout': '30'}}})  2025-01-16 15:05:35.001009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.001029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.001038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:05:35.001049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:05:35.001058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.001070 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:05:35.001079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:05:35.001087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.001103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:05:35.001112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:05:35.001123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.001131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:05:35.001144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:05:35.001152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.001160 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.001177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.001191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:05:35.001202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:05:35.001210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.001219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:05:35.001227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:05:35.001243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.001256 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.001264 | orchestrator | 2025-01-16 15:05:35.001272 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-01-16 15:05:35.001279 | orchestrator | Thursday 16 January 2025 15:03:52 +0000 (0:00:01.312) 0:03:18.912 ****** 2025-01-16 15:05:35.001287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-01-16 15:05:35.001295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-01-16 15:05:35.001302 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.001310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-01-16 15:05:35.001318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-01-16 15:05:35.001325 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.001333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-01-16 15:05:35.001341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-01-16 15:05:35.001349 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.001356 | orchestrator | 2025-01-16 15:05:35.001364 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-01-16 15:05:35.001371 | orchestrator | Thursday 16 January 2025 15:03:53 +0000 (0:00:01.253) 0:03:20.166 ****** 2025-01-16 15:05:35.001379 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.001386 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.001394 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.001402 | orchestrator | 2025-01-16 15:05:35.001409 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-01-16 15:05:35.001417 | orchestrator | Thursday 16 January 2025 15:03:54 +0000 (0:00:00.309) 0:03:20.475 ****** 2025-01-16 15:05:35.001424 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.001432 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.001440 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.001447 | orchestrator | 2025-01-16 15:05:35.001455 | orchestrator | TASK [include_role : placement] 
************************************************ 2025-01-16 15:05:35.001462 | orchestrator | Thursday 16 January 2025 15:03:55 +0000 (0:00:00.925) 0:03:21.400 ****** 2025-01-16 15:05:35.001470 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:05:35.001477 | orchestrator | 2025-01-16 15:05:35.001485 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-01-16 15:05:35.001494 | orchestrator | Thursday 16 January 2025 15:03:56 +0000 (0:00:00.985) 0:03:22.386 ****** 2025-01-16 15:05:35.001507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-01-16 15:05:35.001528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-01-16 15:05:35.001537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-01-16 15:05:35.001545 | orchestrator | 2025-01-16 15:05:35.001553 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single 
external frontend] *** 2025-01-16 15:05:35.001561 | orchestrator | Thursday 16 January 2025 15:03:58 +0000 (0:00:02.535) 0:03:24.921 ****** 2025-01-16 15:05:35.001568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-01-16 15:05:35.001576 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.001589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-01-16 15:05:35.001601 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.001625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-01-16 15:05:35.001634 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.001642 | orchestrator | 2025-01-16 15:05:35.001650 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-01-16 15:05:35.001658 | orchestrator | Thursday 16 January 2025 15:03:59 +0000 (0:00:00.470) 0:03:25.392 ****** 2025-01-16 15:05:35.001665 | orchestrator | skipping: [testbed-node-0] => 
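The placement-api item above shows the shape every kolla-ansible service definition in this log follows: container metadata (container_name, image, volumes, healthcheck) plus a 'haproxy' mapping whose entries split into an internal listener and an external one bound to api.testbed.osism.xyz. A minimal Python sketch of that split, using values taken from the log; the helper is invented for illustration and is not part of the haproxy-config role:

placement_api = {
    "container_name": "placement_api",
    "enabled": True,
    "haproxy": {
        "placement_api": {
            "enabled": True, "mode": "http", "external": False,
            "port": "8780", "listen_port": "8780", "tls_backend": "no",
        },
        "placement_api_external": {
            "enabled": True, "mode": "http", "external": True,
            "external_fqdn": "api.testbed.osism.xyz",
            "port": "8780", "listen_port": "8780", "tls_backend": "no",
        },
    },
}

def split_frontends(service):
    # Partition the enabled haproxy entries into internal and external listeners.
    internal, external = {}, {}
    for name, entry in service.get("haproxy", {}).items():
        if not entry.get("enabled"):
            continue
        (external if entry.get("external") else internal)[name] = entry
    return internal, external

internal, external = split_frontends(placement_api)
print(sorted(internal), sorted(external))   # ['placement_api'] ['placement_api_external']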
(item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-01-16 15:05:35.001676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-01-16 15:05:35.001683 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.001691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-01-16 15:05:35.001698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-01-16 15:05:35.001706 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.001713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-01-16 15:05:35.001721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-01-16 15:05:35.001728 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.001736 | orchestrator | 2025-01-16 15:05:35.001743 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-01-16 15:05:35.001751 | orchestrator | Thursday 16 January 2025 15:03:59 +0000 (0:00:00.724) 0:03:26.116 ****** 2025-01-16 15:05:35.001759 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.001766 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.001779 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.001787 | orchestrator | 2025-01-16 15:05:35.001795 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-01-16 15:05:35.001803 | orchestrator | Thursday 16 January 2025 15:04:00 +0000 (0:00:00.312) 0:03:26.428 ****** 2025-01-16 15:05:35.001811 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.001818 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.001826 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.001834 | orchestrator | 2025-01-16 15:05:35.001842 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-01-16 15:05:35.001849 | orchestrator | Thursday 16 January 2025 15:04:01 +0000 (0:00:00.942) 0:03:27.371 ****** 2025-01-16 15:05:35.001857 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:05:35.001865 | orchestrator | 2025-01-16 15:05:35.001872 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-01-16 15:05:35.001880 | orchestrator | Thursday 16 January 2025 15:04:02 +0000 (0:00:01.133) 0:03:28.505 ****** 2025-01-16 15:05:35.001897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-01-16 15:05:35.001913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.001921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.001930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-01-16 15:05:35.001942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.001950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.001973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-01-16 15:05:35.001982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.001990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.002002 | orchestrator | 2025-01-16 15:05:35.002010 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-01-16 15:05:35.002032 | orchestrator | Thursday 16 January 2025 15:04:05 +0000 (0:00:03.465) 0:03:31.970 ****** 2025-01-16 15:05:35.002041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-01-16 15:05:35.002054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.002062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.002070 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.002083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': 
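Note the mixed truth values in the nova entries above: most 'enabled' fields are Python booleans, but nova_metadata_external carries the string 'no' (and the octavia entries further down use 'yes'). Ansible's bool filter accepts both forms; the sketch below is a stand-alone illustrative equivalent for reading these dicts outside Ansible, not kolla-ansible code:

TRUTHY = {"yes", "true", "on", "1"}
FALSY = {"no", "false", "off", "0"}

def to_bool(value):
    # Normalise the mixed bool/str flags seen in the service definitions above.
    if isinstance(value, bool):
        return value
    text = str(value).strip().lower()
    if text in TRUTHY:
        return True
    if text in FALSY:
        return False
    raise ValueError(f"not a boolean-like value: {value!r}")

print(to_bool(True), to_bool("no"), to_bool("yes"))   # True False True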
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-01-16 15:05:35.002095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.002104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.002112 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.002134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-01-16 15:05:35.002144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.002156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.002173 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.002184 | orchestrator | 2025-01-16 15:05:35.002196 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-01-16 15:05:35.002208 | orchestrator | Thursday 16 January 2025 15:04:06 +0000 (0:00:00.562) 0:03:32.532 ****** 2025-01-16 15:05:35.002217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-01-16 15:05:35.002224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-01-16 15:05:35.002231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-01-16 15:05:35.002238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-01-16 15:05:35.002246 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.002256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-01-16 15:05:35.002263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-01-16 15:05:35.002271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-01-16 15:05:35.002278 | orchestrator | skipping: 
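Every healthcheck block logged above uses the same settings (interval 30, timeout 30, retries 3, start_period 5) around a healthcheck_curl or healthcheck_port probe. A small illustrative calculation of the worst-case detection delay those values imply, assuming the values are seconds and standard Docker healthcheck semantics; this is not output from kolla:

healthcheck = {
    "interval": "30", "retries": "3", "start_period": "5", "timeout": "30",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8774"],
}

interval = int(healthcheck["interval"])
retries = int(healthcheck["retries"])
timeout = int(healthcheck["timeout"])
start_period = int(healthcheck["start_period"])

# Worst case: the start period passes, then `retries` consecutive probes each wait a
# full interval and then hang until the timeout kills them.
worst_case = start_period + retries * (interval + timeout)
print(f"container flagged unhealthy after at most ~{worst_case}s")   # ~185s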
[testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-01-16 15:05:35.002285 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.002292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-01-16 15:05:35.002299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-01-16 15:05:35.002315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-01-16 15:05:35.002323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-01-16 15:05:35.002331 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.002338 | orchestrator | 2025-01-16 15:05:35.002345 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-01-16 15:05:35.002352 | orchestrator | Thursday 16 January 2025 15:04:07 +0000 (0:00:00.915) 0:03:33.448 ****** 2025-01-16 15:05:35.002360 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.002371 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.002378 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.002385 | orchestrator | 2025-01-16 15:05:35.002392 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-01-16 15:05:35.002399 | orchestrator | Thursday 16 January 2025 15:04:07 +0000 (0:00:00.323) 0:03:33.771 ****** 2025-01-16 15:05:35.002406 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.002413 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.002420 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.002427 | orchestrator | 2025-01-16 15:05:35.002434 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-01-16 15:05:35.002441 | orchestrator | Thursday 16 January 2025 15:04:08 +0000 (0:00:00.806) 0:03:34.577 ****** 2025-01-16 15:05:35.002448 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:05:35.002455 | orchestrator | 2025-01-16 15:05:35.002462 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-01-16 15:05:35.002470 | orchestrator | Thursday 16 January 2025 15:04:09 +0000 (0:00:01.200) 0:03:35.778 ****** 2025-01-16 15:05:35.002477 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-01-16 15:05:35.002484 | orchestrator | 2025-01-16 15:05:35.002491 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-01-16 15:05:35.002498 | orchestrator | Thursday 16 January 2025 15:04:10 +0000 (0:00:00.972) 0:03:36.751 ****** 2025-01-16 
15:05:35.002506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-01-16 15:05:35.002514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-01-16 15:05:35.002521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-01-16 15:05:35.002529 | orchestrator | 2025-01-16 15:05:35.002536 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-01-16 15:05:35.002543 | orchestrator | Thursday 16 January 2025 15:04:13 +0000 (0:00:03.238) 0:03:39.990 ****** 2025-01-16 15:05:35.002557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-01-16 15:05:35.002568 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.002585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-01-16 15:05:35.002593 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.002600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-01-16 15:05:35.002633 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.002641 | orchestrator | 2025-01-16 15:05:35.002649 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-01-16 15:05:35.002656 | orchestrator | Thursday 16 January 2025 15:04:14 +0000 (0:00:01.147) 0:03:41.137 ****** 2025-01-16 15:05:35.002663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-01-16 15:05:35.002672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-01-16 15:05:35.002679 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.002687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-01-16 15:05:35.002694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-01-16 15:05:35.002704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-01-16 15:05:35.002711 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.002718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-01-16 15:05:35.002725 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.002733 | orchestrator | 2025-01-16 15:05:35.002740 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-01-16 15:05:35.002747 | orchestrator | Thursday 16 January 2025 15:04:16 +0000 (0:00:01.374) 0:03:42.512 ****** 2025-01-16 15:05:35.002754 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.002761 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.002768 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.002775 | orchestrator | 2025-01-16 15:05:35.002786 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-01-16 15:05:35.002793 | orchestrator | Thursday 16 January 2025 15:04:16 +0000 (0:00:00.200) 0:03:42.712 ****** 2025-01-16 15:05:35.002804 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.002812 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.002819 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.002826 | orchestrator | 2025-01-16 15:05:35.002834 | orchestrator | TASK [nova-cell : Configure loadbalancer for 
nova-spicehtml5proxy] ************* 2025-01-16 15:05:35.002841 | orchestrator | Thursday 16 January 2025 15:04:17 +0000 (0:00:00.815) 0:03:43.528 ****** 2025-01-16 15:05:35.002848 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-01-16 15:05:35.002856 | orchestrator | 2025-01-16 15:05:35.002863 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-01-16 15:05:35.002870 | orchestrator | Thursday 16 January 2025 15:04:18 +0000 (0:00:00.806) 0:03:44.334 ****** 2025-01-16 15:05:35.002886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-01-16 15:05:35.002894 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.002902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-01-16 15:05:35.002909 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.002917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-01-16 15:05:35.002924 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.002932 | orchestrator | 2025-01-16 15:05:35.002939 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-01-16 15:05:35.002946 | orchestrator | Thursday 16 January 2025 15:04:19 +0000 (0:00:01.180) 0:03:45.515 ****** 2025-01-16 15:05:35.002953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-01-16 15:05:35.002960 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.002968 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-01-16 15:05:35.002981 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.002994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-01-16 15:05:35.003002 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.003010 | orchestrator | 2025-01-16 15:05:35.003017 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-01-16 15:05:35.003024 | orchestrator | Thursday 16 January 2025 15:04:20 +0000 (0:00:01.209) 0:03:46.724 ****** 2025-01-16 15:05:35.003031 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.003038 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.003045 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.003052 | orchestrator | 2025-01-16 15:05:35.003059 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-01-16 15:05:35.003066 | orchestrator | Thursday 16 January 2025 15:04:21 +0000 (0:00:01.228) 0:03:47.953 ****** 2025-01-16 15:05:35.003073 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.003080 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.003087 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.003094 | orchestrator | 2025-01-16 15:05:35.003101 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-01-16 15:05:35.003108 | orchestrator | Thursday 16 January 2025 15:04:22 +0000 (0:00:00.339) 0:03:48.293 ****** 2025-01-16 15:05:35.003115 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.003122 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.003129 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.003136 | orchestrator | 2025-01-16 15:05:35.003152 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-01-16 15:05:35.003159 | orchestrator | Thursday 16 January 2025 15:04:22 +0000 (0:00:00.715) 0:03:49.008 ****** 2025-01-16 15:05:35.003166 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-01-16 15:05:35.003173 | orchestrator | 2025-01-16 15:05:35.003180 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-01-16 15:05:35.003187 | orchestrator | Thursday 16 January 2025 15:04:23 +0000 (0:00:00.994) 0:03:50.003 ****** 2025-01-16 15:05:35.003194 | orchestrator | skipping: 
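Of the three console proxies handled by cell_proxy_loadbalancer.yml, only nova-novncproxy is enabled here; the SPICE and serial proxies are skipped. All three carry backend_http_extra entries ('timeout tunnel 1h' for the VNC/SPICE proxies, '10m' for the serial proxy) so that long-lived websocket tunnels are not cut by the default HTTP timeouts. A hypothetical rendering of what such a backend stanza could look like; the backend and server naming is invented and this is not the actual haproxy-config template output:

def render_backend(name, servers, extra):
    # Assemble an haproxy backend block with the extra per-backend options appended.
    lines = [f"backend {name}_back", "    mode http"]
    lines += [f"    {option}" for option in extra]
    lines += [f"    server {host} {address} check" for host, address in servers]
    return "\n".join(lines)

print(render_backend(
    "nova_novncproxy",
    [("testbed-node-0", "192.168.16.10:6080"),
     ("testbed-node-1", "192.168.16.11:6080"),
     ("testbed-node-2", "192.168.16.12:6080")],
    ["timeout tunnel 1h"],
))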
[testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-01-16 15:05:35.003200 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.003207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-01-16 15:05:35.003213 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.003223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-01-16 15:05:35.003230 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.003236 | orchestrator | 2025-01-16 15:05:35.003242 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-01-16 15:05:35.003249 | orchestrator | Thursday 16 January 2025 15:04:25 +0000 (0:00:01.499) 0:03:51.502 ****** 2025-01-16 15:05:35.003255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-01-16 15:05:35.003261 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.003268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-01-16 15:05:35.003274 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.003281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-01-16 15:05:35.003287 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.003294 | orchestrator | 2025-01-16 15:05:35.003308 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-01-16 15:05:35.003314 | orchestrator | Thursday 16 January 2025 15:04:26 +0000 (0:00:01.554) 0:03:53.056 ****** 2025-01-16 15:05:35.003321 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.003327 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.003334 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.003340 | orchestrator | 2025-01-16 15:05:35.003346 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-01-16 15:05:35.003352 | orchestrator | Thursday 16 January 2025 15:04:28 +0000 (0:00:01.707) 0:03:54.763 ****** 2025-01-16 15:05:35.003359 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.003365 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.003371 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.003378 | orchestrator | 2025-01-16 15:05:35.003384 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-01-16 15:05:35.003390 | orchestrator | Thursday 16 January 2025 15:04:28 +0000 (0:00:00.352) 0:03:55.116 ****** 2025-01-16 15:05:35.003396 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.003403 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.003412 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.003419 | orchestrator | 2025-01-16 15:05:35.003425 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-01-16 15:05:35.003431 | orchestrator | Thursday 16 January 2025 15:04:29 +0000 (0:00:01.003) 0:03:56.120 ****** 2025-01-16 15:05:35.003438 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:05:35.003444 | orchestrator | 2025-01-16 15:05:35.003450 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-01-16 15:05:35.003456 | orchestrator | Thursday 16 January 2025 15:04:31 +0000 (0:00:01.194) 0:03:57.314 ****** 2025-01-16 15:05:35.003463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-01-16 15:05:35.003474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-01-16 15:05:35.003481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-01-16 15:05:35.003488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-01-16 15:05:35.003503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.003514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-01-16 15:05:35.003525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-01-16 15:05:35.003532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-01-16 15:05:35.003539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-01-16 15:05:35.003545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.003562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-01-16 15:05:35.003572 | orchestrator | skipping: 
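In the octavia block only octavia-api reports 'changed': it is the one octavia service in the log that defines a 'haproxy' mapping, so the driver-agent, health-manager, housekeeping and worker items are skipped. A minimal illustrative filter over the logged structure, not the role's own logic:

project_services = {
    "octavia-api": {"enabled": True, "haproxy": {"octavia_api": {"enabled": "yes", "port": "9876"}}},
    "octavia-driver-agent": {"enabled": True},
    "octavia-health-manager": {"enabled": True},
    "octavia-housekeeping": {"enabled": True},
    "octavia-worker": {"enabled": True},
}

# Only services that carry a 'haproxy' mapping get frontend/backend configuration.
load_balanced = [name for name, service in project_services.items() if service.get("haproxy")]
print(load_balanced)   # ['octavia-api']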
[testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-01-16 15:05:35.003584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-01-16 15:05:35.003591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-01-16 15:05:35.003598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.003613 | orchestrator | 2025-01-16 15:05:35.003620 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-01-16 15:05:35.003627 | orchestrator | Thursday 16 January 2025 15:04:35 +0000 (0:00:04.668) 0:04:01.983 ****** 2025-01-16 15:05:35.003633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-01-16 15:05:35.003648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-01-16 15:05:35.003662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-01-16 15:05:35.003670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-01-16 15:05:35.003676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.003683 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.003690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-01-16 15:05:35.003697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-01-16 15:05:35.003715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-01-16 15:05:35.003726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-01-16 15:05:35.003734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.003741 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.003748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-01-16 15:05:35.003755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-01-16 15:05:35.003762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-01-16 15:05:35.003780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-01-16 15:05:35.003792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-01-16 15:05:35.003799 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.003806 | orchestrator | 2025-01-16 15:05:35.003813 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-01-16 15:05:35.003821 | orchestrator | Thursday 16 January 2025 15:04:36 +0000 (0:00:00.726) 0:04:02.710 ****** 2025-01-16 15:05:35.003828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-01-16 15:05:35.003834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-01-16 15:05:35.003841 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.003848 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-01-16 15:05:35.003854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-01-16 15:05:35.003861 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.003869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-01-16 15:05:35.003878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-01-16 15:05:35.003885 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.003891 | orchestrator | 2025-01-16 15:05:35.003897 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-01-16 15:05:35.003904 | orchestrator | Thursday 16 January 2025 15:04:37 +0000 (0:00:00.915) 0:04:03.626 ****** 2025-01-16 15:05:35.003910 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.003916 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.003922 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.003928 | orchestrator | 2025-01-16 15:05:35.003935 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-01-16 15:05:35.003941 | orchestrator | Thursday 16 January 2025 15:04:37 +0000 (0:00:00.345) 0:04:03.972 ****** 2025-01-16 15:05:35.003947 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.003954 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.003960 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.003969 | orchestrator | 2025-01-16 15:05:35.003976 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-01-16 15:05:35.003982 | orchestrator | Thursday 16 January 2025 15:04:38 +0000 (0:00:00.863) 0:04:04.835 ****** 2025-01-16 15:05:35.003989 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:05:35.003995 | orchestrator | 2025-01-16 15:05:35.004001 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-01-16 15:05:35.004007 | orchestrator | Thursday 16 January 2025 15:04:39 +0000 (0:00:01.313) 0:04:06.149 ****** 2025-01-16 15:05:35.004014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-01-16 15:05:35.004033 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-01-16 15:05:35.004041 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-01-16 15:05:35.004048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-01-16 15:05:35.004058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-01-16 15:05:35.004091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-01-16 15:05:35.004103 | orchestrator | 2025-01-16 15:05:35.004110 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-01-16 15:05:35.004117 | orchestrator | Thursday 16 January 2025 15:04:45 +0000 (0:00:06.027) 0:04:12.176 ****** 2025-01-16 15:05:35.004123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-01-16 15:05:35.004130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-01-16 15:05:35.004140 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.004146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-01-16 15:05:35.004161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-01-16 15:05:35.004173 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.004179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-01-16 15:05:35.004186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-01-16 15:05:35.004196 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.004202 | orchestrator | 2025-01-16 15:05:35.004209 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-01-16 15:05:35.004215 | orchestrator | Thursday 16 January 2025 15:04:46 +0000 (0:00:00.833) 0:04:13.010 ****** 2025-01-16 15:05:35.004221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-01-16 15:05:35.004228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-01-16 15:05:35.004234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-01-16 15:05:35.004241 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.004247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-01-16 15:05:35.004254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-01-16 15:05:35.004268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-01-16 15:05:35.004275 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.004281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-01-16 15:05:35.004288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-01-16 15:05:35.004294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-01-16 15:05:35.004301 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.004307 | orchestrator | 2025-01-16 15:05:35.004314 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-01-16 15:05:35.004320 | orchestrator | Thursday 16 January 2025 15:04:47 +0000 (0:00:01.077) 0:04:14.087 ****** 2025-01-16 15:05:35.004326 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.004332 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.004339 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.004345 | orchestrator | 2025-01-16 15:05:35.004351 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-01-16 15:05:35.004357 | orchestrator | Thursday 16 January 2025 15:04:48 +0000 (0:00:00.397) 0:04:14.485 ****** 2025-01-16 15:05:35.004367 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.004373 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.004379 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.004385 | orchestrator | 2025-01-16 15:05:35.004392 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-01-16 15:05:35.004398 | orchestrator | Thursday 16 January 2025 15:04:49 +0000 (0:00:01.050) 0:04:15.536 ****** 2025-01-16 15:05:35.004404 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:05:35.004410 | orchestrator | 2025-01-16 15:05:35.004416 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-01-16 15:05:35.004423 | orchestrator | Thursday 16 January 2025 15:04:50 +0000 (0:00:01.299) 0:04:16.835 ****** 2025-01-16 15:05:35.004429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-01-16 15:05:35.004436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-01-16 15:05:35.004442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:05:35.004457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:05:35.004464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-01-16 15:05:35.004470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-01-16 15:05:35.004480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-01-16 15:05:35.004491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:05:35.004499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:05:35.004505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-01-16 15:05:35.004519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-01-16 15:05:35.004526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-01-16 15:05:35.004536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:05:35.004549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:05:35.004556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-01-16 15:05:35.004563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-01-16 15:05:35.004577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-01-16 15:05:35.004584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:05:35.004595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-01-16 15:05:35.004614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:05:35.004621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-01-16 15:05:35.004628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-01-16 15:05:35.004635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:05:35.004653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2025-01-16 15:05:35.004668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:05:35.004679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-01-16 15:05:35.004686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:05:35.004693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-01-16 15:05:35.004700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 
'backend_http_extra': ['timeout server 45s']}}}})  2025-01-16 15:05:35.004711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:05:35.004721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:05:35.004732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-01-16 15:05:35.004739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:05:35.004745 | orchestrator | 2025-01-16 15:05:35.004752 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-01-16 15:05:35.004758 | orchestrator | Thursday 16 January 2025 15:04:54 +0000 (0:00:03.527) 0:04:20.363 ****** 2025-01-16 15:05:35.004765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-01-16 15:05:35.004771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 
'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-01-16 15:05:35.004778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:05:35.004788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:05:35.004797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-01-16 15:05:35.004804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-01-16 15:05:35.004816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-01-16 15:05:35.004823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:05:35.004829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:05:35.004839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-01-16 15:05:35.004851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:05:35.004858 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.004868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-01-16 
15:05:35.004875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-01-16 15:05:35.004882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:05:35.004889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:05:35.004895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-01-16 15:05:35.004905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-01-16 15:05:35.004920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-01-16 15:05:35.004926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:05:35.004934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:05:35.004940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-01-16 15:05:35.004947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:05:35.004953 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.004967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': 
False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-01-16 15:05:35.004974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-01-16 15:05:35.004984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:05:35.004992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:05:35.004998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-01-16 15:05:35.005005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-01-16 15:05:35.005020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 
'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-01-16 15:05:35.005031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:05:35.005037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:05:35.005044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-01-16 15:05:35.005050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:05:35.005057 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.005064 | orchestrator | 2025-01-16 15:05:35.005070 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-01-16 15:05:35.005076 | orchestrator | Thursday 16 January 2025 15:04:55 +0000 (0:00:00.998) 0:04:21.361 ****** 2025-01-16 15:05:35.005082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-01-16 
15:05:35.005089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-01-16 15:05:35.005095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-01-16 15:05:35.005102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-01-16 15:05:35.005112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-01-16 15:05:35.005118 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.005125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-01-16 15:05:35.005135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-01-16 15:05:35.005142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-01-16 15:05:35.005148 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.005154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-01-16 15:05:35.005161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-01-16 15:05:35.005167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-01-16 15:05:35.005173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-01-16 15:05:35.005179 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.005186 | orchestrator | 
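For reference, each loop item that the haproxy-config tasks above iterate over is a service definition with the same shape: container_name, group, enabled, image, volumes, dimensions, and an optional haproxy map of listeners (enabled, mode, external, port, listen_port, active_passive, ...). The short Python sketch below models one trimmed item from the alertmanager entry in this log and shows how the enabled/external flags of each listener would be evaluated; it is illustrative only, and the helper name external_frontends is an assumption made here, not something taken from kolla-ansible.

    # Illustrative sketch only; models the loop items printed above,
    # not the kolla-ansible haproxy-config role itself.
    prometheus_services = {
        "prometheus-alertmanager": {
            "container_name": "prometheus_alertmanager",
            "group": "prometheus-alertmanager",
            "enabled": True,
            "image": "nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1",
            "haproxy": {
                "prometheus_alertmanager": {
                    "enabled": True, "mode": "http", "external": False,
                    "port": "9093", "active_passive": True,
                },
                "prometheus_alertmanager_external": {
                    "enabled": True, "mode": "http", "external": True,
                    "external_fqdn": "api.testbed.osism.xyz",
                    "port": "9093", "listen_port": "9093", "active_passive": True,
                },
            },
        },
    }

    def external_frontends(services):
        # Yield (service, listener, port) for listeners that are enabled and
        # external, i.e. the ones an external frontend would be rendered for.
        for name, svc in services.items():
            if not svc.get("enabled"):
                continue
            for listener, cfg in svc.get("haproxy", {}).items():
                if cfg.get("enabled") and cfg.get("external"):
                    yield name, listener, cfg.get("listen_port") or cfg.get("port")

    for name, listener, port in external_frontends(prometheus_services):
        print(f"{name}: {listener} listens externally on port {port}")

Run against the trimmed item above, this prints the single external alertmanager listener on port 9093; internal-only listeners such as prometheus_server (external: False) are filtered out, which mirrors why the "single external frontend" task skips them here.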
2025-01-16 15:05:35.005192 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-01-16 15:05:35.005198 | orchestrator | Thursday 16 January 2025 15:04:56 +0000 (0:00:01.423) 0:04:22.785 ****** 2025-01-16 15:05:35.005205 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.005211 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.005217 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.005224 | orchestrator | 2025-01-16 15:05:35.005230 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-01-16 15:05:35.005237 | orchestrator | Thursday 16 January 2025 15:04:57 +0000 (0:00:00.433) 0:04:23.219 ****** 2025-01-16 15:05:35.005243 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.005249 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.005256 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.005262 | orchestrator | 2025-01-16 15:05:35.005268 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-01-16 15:05:35.005274 | orchestrator | Thursday 16 January 2025 15:04:58 +0000 (0:00:01.073) 0:04:24.293 ****** 2025-01-16 15:05:35.005284 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:05:35.005290 | orchestrator | 2025-01-16 15:05:35.005299 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-01-16 15:05:35.005305 | orchestrator | Thursday 16 January 2025 15:04:59 +0000 (0:00:01.167) 0:04:25.461 ****** 2025-01-16 15:05:35.005312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-01-16 15:05:35.005326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 
'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-01-16 15:05:35.005333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-01-16 15:05:35.005340 | orchestrator | 2025-01-16 15:05:35.005347 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-01-16 15:05:35.005353 | orchestrator | Thursday 16 January 2025 15:05:01 +0000 (0:00:02.265) 0:04:27.727 ****** 2025-01-16 15:05:35.005360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-01-16 15:05:35.005370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-01-16 15:05:35.005380 | orchestrator | skipping: [testbed-node-0] 2025-01-16 
15:05:35.005387 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.005396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-01-16 15:05:35.005403 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.005412 | orchestrator | 2025-01-16 15:05:35.005418 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-01-16 15:05:35.005424 | orchestrator | Thursday 16 January 2025 15:05:01 +0000 (0:00:00.454) 0:04:28.181 ****** 2025-01-16 15:05:35.005431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-01-16 15:05:35.005438 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.005444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-01-16 15:05:35.005450 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.005457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-01-16 15:05:35.005463 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.005469 | orchestrator | 2025-01-16 15:05:35.005476 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-01-16 15:05:35.005482 | orchestrator | Thursday 16 January 2025 15:05:02 +0000 (0:00:00.620) 0:04:28.802 ****** 2025-01-16 15:05:35.005491 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.005498 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.005504 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.005510 | orchestrator | 2025-01-16 15:05:35.005516 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-01-16 15:05:35.005523 | orchestrator | Thursday 16 January 2025 15:05:02 +0000 (0:00:00.370) 0:04:29.173 ****** 2025-01-16 15:05:35.005529 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.005535 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.005541 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.005548 | orchestrator | 2025-01-16 15:05:35.005554 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-01-16 15:05:35.005560 | orchestrator | Thursday 16 January 2025 15:05:04 +0000 (0:00:01.029) 0:04:30.203 ****** 2025-01-16 
15:05:35.005567 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:05:35.005573 | orchestrator | 2025-01-16 15:05:35.005579 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-01-16 15:05:35.005585 | orchestrator | Thursday 16 January 2025 15:05:05 +0000 (0:00:01.319) 0:04:31.522 ****** 2025-01-16 15:05:35.005592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/skyline-apiserver:2024.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-01-16 15:05:35.005599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/skyline-apiserver:2024.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-01-16 15:05:35.005625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/skyline-apiserver:2024.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-01-16 15:05:35.005636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/skyline-console:2024.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-01-16 15:05:35.005643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/skyline-console:2024.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-01-16 15:05:35.005650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/skyline-console:2024.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-01-16 15:05:35.005656 | orchestrator | 2025-01-16 15:05:35.005663 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-01-16 15:05:35.005669 | orchestrator | Thursday 16 January 2025 15:05:10 +0000 (0:00:05.123) 0:04:36.646 ****** 2025-01-16 15:05:35.005684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/skyline-apiserver:2024.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-01-16 15:05:35.005695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/skyline-console:2024.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-01-16 15:05:35.005701 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.005708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/skyline-apiserver:2024.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-01-16 15:05:35.005715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/skyline-console:2024.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-01-16 15:05:35.005721 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.005735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': 
{'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/skyline-apiserver:2024.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-01-16 15:05:35.005742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/skyline-console:2024.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-01-16 15:05:35.005751 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.005758 | orchestrator | 2025-01-16 15:05:35.005764 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-01-16 15:05:35.005770 | orchestrator | Thursday 16 January 2025 15:05:11 +0000 (0:00:00.613) 0:04:37.259 ****** 2025-01-16 15:05:35.005777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-01-16 15:05:35.005783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-01-16 15:05:35.005790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-01-16 15:05:35.005796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-01-16 15:05:35.005803 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.005813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-01-16 15:05:35.005820 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-01-16 15:05:35.005827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-01-16 15:05:35.005833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-01-16 15:05:35.005839 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.005846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-01-16 15:05:35.005852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-01-16 15:05:35.005859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-01-16 15:05:35.005870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-01-16 15:05:35.005877 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.005883 | orchestrator | 2025-01-16 15:05:35.005890 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-01-16 15:05:35.005896 | orchestrator | Thursday 16 January 2025 15:05:12 +0000 (0:00:00.962) 0:04:38.222 ****** 2025-01-16 15:05:35.005902 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.005908 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.005915 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.005921 | orchestrator | 2025-01-16 15:05:35.005928 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-01-16 15:05:35.005934 | orchestrator | Thursday 16 January 2025 15:05:12 +0000 (0:00:00.212) 0:04:38.434 ****** 2025-01-16 15:05:35.005940 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.005947 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.005953 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.005959 | orchestrator | 2025-01-16 15:05:35.005965 | orchestrator | TASK [include_role : swift] **************************************************** 2025-01-16 15:05:35.005972 | orchestrator | Thursday 16 January 2025 15:05:13 +0000 (0:00:01.065) 0:04:39.500 ****** 2025-01-16 15:05:35.005978 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.005984 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.005991 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.005997 | orchestrator | 2025-01-16 15:05:35.006003 | orchestrator | TASK [include_role 
: tacker] *************************************************** 2025-01-16 15:05:35.006009 | orchestrator | Thursday 16 January 2025 15:05:13 +0000 (0:00:00.371) 0:04:39.871 ****** 2025-01-16 15:05:35.006041 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.006048 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.006054 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.006060 | orchestrator | 2025-01-16 15:05:35.006067 | orchestrator | TASK [include_role : trove] **************************************************** 2025-01-16 15:05:35.006073 | orchestrator | Thursday 16 January 2025 15:05:14 +0000 (0:00:00.346) 0:04:40.217 ****** 2025-01-16 15:05:35.006079 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.006086 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.006092 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.006098 | orchestrator | 2025-01-16 15:05:35.006105 | orchestrator | TASK [include_role : venus] **************************************************** 2025-01-16 15:05:35.006111 | orchestrator | Thursday 16 January 2025 15:05:14 +0000 (0:00:00.362) 0:04:40.580 ****** 2025-01-16 15:05:35.006117 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.006123 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.006130 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.006136 | orchestrator | 2025-01-16 15:05:35.006145 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-01-16 15:05:35.006151 | orchestrator | Thursday 16 January 2025 15:05:14 +0000 (0:00:00.193) 0:04:40.773 ****** 2025-01-16 15:05:35.006157 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.006164 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.006170 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.006176 | orchestrator | 2025-01-16 15:05:35.006183 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-01-16 15:05:35.006189 | orchestrator | Thursday 16 January 2025 15:05:14 +0000 (0:00:00.351) 0:04:41.124 ****** 2025-01-16 15:05:35.006195 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.006202 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.006208 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.006214 | orchestrator | 2025-01-16 15:05:35.006220 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-01-16 15:05:35.006230 | orchestrator | Thursday 16 January 2025 15:05:15 +0000 (0:00:00.640) 0:04:41.765 ****** 2025-01-16 15:05:35.006236 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:05:35.006243 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:05:35.006249 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:05:35.006255 | orchestrator | 2025-01-16 15:05:35.006261 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-01-16 15:05:35.006267 | orchestrator | Thursday 16 January 2025 15:05:16 +0000 (0:00:00.428) 0:04:42.194 ****** 2025-01-16 15:05:35.006273 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:05:35.006280 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:05:35.006286 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:05:35.006292 | orchestrator | 2025-01-16 15:05:35.006298 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-01-16 15:05:35.006304 | 
orchestrator | Thursday 16 January 2025 15:05:16 +0000 (0:00:00.378) 0:04:42.572 ****** 2025-01-16 15:05:35.006310 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:05:35.006317 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:05:35.006323 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:05:35.006329 | orchestrator | 2025-01-16 15:05:35.006335 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-01-16 15:05:35.006342 | orchestrator | Thursday 16 January 2025 15:05:17 +0000 (0:00:00.783) 0:04:43.356 ****** 2025-01-16 15:05:35.006348 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:05:35.006354 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:05:35.006360 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:05:35.006366 | orchestrator | 2025-01-16 15:05:35.006373 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-01-16 15:05:35.006379 | orchestrator | Thursday 16 January 2025 15:05:17 +0000 (0:00:00.662) 0:04:44.019 ****** 2025-01-16 15:05:35.006385 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:05:35.006391 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:05:35.006397 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:05:35.006408 | orchestrator | 2025-01-16 15:05:35.006420 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-01-16 15:05:35.006430 | orchestrator | Thursday 16 January 2025 15:05:18 +0000 (0:00:00.818) 0:04:44.837 ****** 2025-01-16 15:05:35.006441 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:05:35.006450 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:05:35.006460 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:05:35.006470 | orchestrator | 2025-01-16 15:05:35.006486 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-01-16 15:05:35.006497 | orchestrator | Thursday 16 January 2025 15:05:21 +0000 (0:00:03.075) 0:04:47.913 ****** 2025-01-16 15:05:35.006508 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:05:35.006519 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:05:35.006530 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:05:35.006541 | orchestrator | 2025-01-16 15:05:35.006552 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-01-16 15:05:35.006564 | orchestrator | Thursday 16 January 2025 15:05:23 +0000 (0:00:01.805) 0:04:49.719 ****** 2025-01-16 15:05:35.006574 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.006582 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.006592 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.006601 | orchestrator | 2025-01-16 15:05:35.006643 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-01-16 15:05:35.006649 | orchestrator | Thursday 16 January 2025 15:05:24 +0000 (0:00:00.556) 0:04:50.275 ****** 2025-01-16 15:05:35.006655 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:05:35.006667 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:05:35.006673 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:05:35.006680 | orchestrator | 2025-01-16 15:05:35.006686 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-01-16 15:05:35.006692 | orchestrator | Thursday 16 January 2025 15:05:26 +0000 (0:00:02.804) 0:04:53.079 ****** 2025-01-16 15:05:35.006704 | orchestrator | 
skipping: [testbed-node-0] 2025-01-16 15:05:35.006710 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.006716 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.006723 | orchestrator | 2025-01-16 15:05:35.006729 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-01-16 15:05:35.006735 | orchestrator | Thursday 16 January 2025 15:05:27 +0000 (0:00:00.362) 0:04:53.442 ****** 2025-01-16 15:05:35.006741 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.006748 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.006754 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.006760 | orchestrator | 2025-01-16 15:05:35.006766 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-01-16 15:05:35.006773 | orchestrator | Thursday 16 January 2025 15:05:27 +0000 (0:00:00.367) 0:04:53.809 ****** 2025-01-16 15:05:35.006779 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.006785 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.006791 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.006801 | orchestrator | 2025-01-16 15:05:35.006812 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-01-16 15:05:35.006824 | orchestrator | Thursday 16 January 2025 15:05:27 +0000 (0:00:00.211) 0:04:54.021 ****** 2025-01-16 15:05:35.006835 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.006847 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.006858 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.006870 | orchestrator | 2025-01-16 15:05:35.006881 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-01-16 15:05:35.006892 | orchestrator | Thursday 16 January 2025 15:05:28 +0000 (0:00:00.367) 0:04:54.388 ****** 2025-01-16 15:05:35.006899 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.006905 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.006912 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.006918 | orchestrator | 2025-01-16 15:05:35.006928 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-01-16 15:05:35.006934 | orchestrator | Thursday 16 January 2025 15:05:28 +0000 (0:00:00.600) 0:04:54.989 ****** 2025-01-16 15:05:35.006940 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.006947 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:05:35.006953 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.006959 | orchestrator | 2025-01-16 15:05:35.006965 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-01-16 15:05:35.006971 | orchestrator | Thursday 16 January 2025 15:05:29 +0000 (0:00:00.221) 0:04:55.210 ****** 2025-01-16 15:05:35.006978 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:05:35.006984 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:05:35.006990 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:05:35.006996 | orchestrator | 2025-01-16 15:05:35.007003 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-01-16 15:05:35.007009 | orchestrator | Thursday 16 January 2025 15:05:33 +0000 (0:00:04.733) 0:04:59.944 ****** 2025-01-16 15:05:35.007015 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:05:35.007022 | orchestrator | 
skipping: [testbed-node-1] 2025-01-16 15:05:35.007028 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:05:35.007086 | orchestrator | 2025-01-16 15:05:35.007093 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 15:05:35.007100 | orchestrator | testbed-node-0 : ok=85  changed=42  unreachable=0 failed=0 skipped=138  rescued=0 ignored=0 2025-01-16 15:05:35.007106 | orchestrator | testbed-node-1 : ok=84  changed=42  unreachable=0 failed=0 skipped=138  rescued=0 ignored=0 2025-01-16 15:05:35.007112 | orchestrator | testbed-node-2 : ok=84  changed=42  unreachable=0 failed=0 skipped=138  rescued=0 ignored=0 2025-01-16 15:05:35.007118 | orchestrator | 2025-01-16 15:05:35.007124 | orchestrator | 2025-01-16 15:05:35.007135 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 15:05:35.007141 | orchestrator | Thursday 16 January 2025 15:05:34 +0000 (0:00:00.523) 0:05:00.467 ****** 2025-01-16 15:05:35.007147 | orchestrator | =============================================================================== 2025-01-16 15:05:35.007152 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 6.50s 2025-01-16 15:05:35.007158 | orchestrator | haproxy-config : Copying over heat haproxy config ----------------------- 6.38s 2025-01-16 15:05:35.007164 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 6.13s 2025-01-16 15:05:35.007170 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 6.03s 2025-01-16 15:05:35.007181 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 5.12s 2025-01-16 15:05:38.000209 | orchestrator | haproxy-config : Copying over ironic haproxy config --------------------- 5.10s 2025-01-16 15:05:38.000309 | orchestrator | loadbalancer : Removing checks for services which are disabled ---------- 4.86s 2025-01-16 15:05:38.000318 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 4.73s 2025-01-16 15:05:38.000324 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 4.67s 2025-01-16 15:05:38.000329 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 4.64s 2025-01-16 15:05:38.000334 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 4.47s 2025-01-16 15:05:38.000341 | orchestrator | loadbalancer : Ensuring haproxy service config subdir exists ------------ 4.05s 2025-01-16 15:05:38.000346 | orchestrator | loadbalancer : Copying over haproxy.cfg --------------------------------- 3.98s 2025-01-16 15:05:38.000351 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.87s 2025-01-16 15:05:38.000356 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.80s 2025-01-16 15:05:38.000361 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.78s 2025-01-16 15:05:38.000366 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 3.78s 2025-01-16 15:05:38.000371 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.74s 2025-01-16 15:05:38.000376 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 3.70s 2025-01-16 15:05:38.000381 | orchestrator | loadbalancer 
: Ensuring config directories exist ------------------------ 3.68s 2025-01-16 15:05:38.000385 | orchestrator | 2025-01-16 15:05:34 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:05:38.000404 | orchestrator | 2025-01-16 15:05:37 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:05:41.021534 | orchestrator | 2025-01-16 15:05:37 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:05:41.021693 | orchestrator | 2025-01-16 15:05:37 | INFO  | Task 167fa523-ffb9-4593-b2ec-ff6b8fc9cc52 is in state STARTED 2025-01-16 15:05:41.021719 | orchestrator | 2025-01-16 15:05:37 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:05:41.021752 | orchestrator | 2025-01-16 15:05:41 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:05:41.021831 | orchestrator | 2025-01-16 15:05:41 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:05:41.021999 | orchestrator | 2025-01-16 15:05:41 | INFO  | Task 167fa523-ffb9-4593-b2ec-ff6b8fc9cc52 is in state STARTED 2025-01-16 15:05:44.053355 | orchestrator | 2025-01-16 15:05:41 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:05:44.053504 | orchestrator | 2025-01-16 15:05:44 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:05:47.076274 | orchestrator | 2025-01-16 15:05:44 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:05:47.076402 | orchestrator | 2025-01-16 15:05:44 | INFO  | Task 167fa523-ffb9-4593-b2ec-ff6b8fc9cc52 is in state STARTED 2025-01-16 15:05:47.076416 | orchestrator | 2025-01-16 15:05:44 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:05:47.076440 | orchestrator | 2025-01-16 15:05:47 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:05:50.106465 | orchestrator | 2025-01-16 15:05:47 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:05:50.106640 | orchestrator | 2025-01-16 15:05:47 | INFO  | Task 167fa523-ffb9-4593-b2ec-ff6b8fc9cc52 is in state STARTED 2025-01-16 15:05:50.106663 | orchestrator | 2025-01-16 15:05:47 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:05:50.106720 | orchestrator | 2025-01-16 15:05:50 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:05:53.125838 | orchestrator | 2025-01-16 15:05:50 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:05:53.125952 | orchestrator | 2025-01-16 15:05:50 | INFO  | Task 167fa523-ffb9-4593-b2ec-ff6b8fc9cc52 is in state STARTED 2025-01-16 15:05:53.125988 | orchestrator | 2025-01-16 15:05:50 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:05:53.126076 | orchestrator | 2025-01-16 15:05:53 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:05:56.151562 | orchestrator | 2025-01-16 15:05:53 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:05:56.151715 | orchestrator | 2025-01-16 15:05:53 | INFO  | Task 167fa523-ffb9-4593-b2ec-ff6b8fc9cc52 is in state STARTED 2025-01-16 15:05:56.151736 | orchestrator | 2025-01-16 15:05:53 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:05:56.151770 | orchestrator | 2025-01-16 15:05:56 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:05:59.175649 | orchestrator | 2025-01-16 15:05:56 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is 
in state STARTED 2025-01-16 15:05:59.175737 | orchestrator | 2025-01-16 15:05:56 | INFO  | Task 167fa523-ffb9-4593-b2ec-ff6b8fc9cc52 is in state STARTED 2025-01-16 15:05:59.175746 | orchestrator | 2025-01-16 15:05:56 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:05:59.175765 | orchestrator | 2025-01-16 15:05:59 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:06:02.200466 | orchestrator | 2025-01-16 15:05:59 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:06:02.200673 | orchestrator | 2025-01-16 15:05:59 | INFO  | Task 167fa523-ffb9-4593-b2ec-ff6b8fc9cc52 is in state STARTED 2025-01-16 15:06:02.200706 | orchestrator | 2025-01-16 15:05:59 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:06:02.200747 | orchestrator | 2025-01-16 15:06:02 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:06:05.222763 | orchestrator | 2025-01-16 15:06:02 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:06:05.222884 | orchestrator | 2025-01-16 15:06:02 | INFO  | Task 167fa523-ffb9-4593-b2ec-ff6b8fc9cc52 is in state STARTED 2025-01-16 15:06:05.222906 | orchestrator | 2025-01-16 15:06:02 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:06:05.222941 | orchestrator | 2025-01-16 15:06:05 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:06:08.242450 | orchestrator | 2025-01-16 15:06:05 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:06:08.242534 | orchestrator | 2025-01-16 15:06:05 | INFO  | Task 167fa523-ffb9-4593-b2ec-ff6b8fc9cc52 is in state STARTED 2025-01-16 15:06:08.242561 | orchestrator | 2025-01-16 15:06:05 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:06:08.242579 | orchestrator | 2025-01-16 15:06:08 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:06:11.263744 | orchestrator | 2025-01-16 15:06:08 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:06:11.263867 | orchestrator | 2025-01-16 15:06:08 | INFO  | Task 167fa523-ffb9-4593-b2ec-ff6b8fc9cc52 is in state STARTED 2025-01-16 15:06:11.263888 | orchestrator | 2025-01-16 15:06:08 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:06:11.263950 | orchestrator | 2025-01-16 15:06:11 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:06:11.264057 | orchestrator | 2025-01-16 15:06:11 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:06:11.264081 | orchestrator | 2025-01-16 15:06:11 | INFO  | Task 167fa523-ffb9-4593-b2ec-ff6b8fc9cc52 is in state STARTED 2025-01-16 15:06:14.293713 | orchestrator | 2025-01-16 15:06:11 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:06:14.293853 | orchestrator | 2025-01-16 15:06:14 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:06:14.293952 | orchestrator | 2025-01-16 15:06:14 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:06:14.297880 | orchestrator | 2025-01-16 15:06:14 | INFO  | Task 167fa523-ffb9-4593-b2ec-ff6b8fc9cc52 is in state STARTED 2025-01-16 15:06:14.298119 | orchestrator | 2025-01-16 15:06:14 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:06:17.320348 | orchestrator | 2025-01-16 15:06:17 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:06:17.320537 | 
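Between plays the osism manager polls the three Celery task IDs once per second and only continues when they leave the STARTED state. The same wait-until-done pattern, expressed as an Ansible retry loop around a hypothetical status helper, looks roughly like this:

  - name: Wait until a deployment task has finished (sketch; /usr/local/bin/task-state is a hypothetical helper)
    ansible.builtin.command: /usr/local/bin/task-state 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6
    register: task_state
    changed_when: false
    until: task_state.stdout not in ["PENDING", "STARTED"]
    retries: 600
    delay: 1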
orchestrator | 2025-01-16 15:06:17 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:06:17.320562 | orchestrator | 2025-01-16 15:06:17 | INFO  | Task 167fa523-ffb9-4593-b2ec-ff6b8fc9cc52 is in state STARTED 2025-01-16 15:06:20.338497 | orchestrator | 2025-01-16 15:06:17 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:06:20.338740 | orchestrator | 2025-01-16 15:06:20 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:06:20.338927 | orchestrator | 2025-01-16 15:06:20 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:06:20.339102 | orchestrator | 2025-01-16 15:06:20 | INFO  | Task 167fa523-ffb9-4593-b2ec-ff6b8fc9cc52 is in state STARTED 2025-01-16 15:06:23.371441 | orchestrator | 2025-01-16 15:06:20 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:06:23.371541 | orchestrator | 2025-01-16 15:06:23 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:06:26.399400 | orchestrator | 2025-01-16 15:06:23 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:06:26.399506 | orchestrator | 2025-01-16 15:06:23 | INFO  | Task 167fa523-ffb9-4593-b2ec-ff6b8fc9cc52 is in state STARTED 2025-01-16 15:06:26.399522 | orchestrator | 2025-01-16 15:06:23 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:06:26.399553 | orchestrator | 2025-01-16 15:06:26 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:06:29.420408 | orchestrator | 2025-01-16 15:06:26 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:06:29.420530 | orchestrator | 2025-01-16 15:06:26 | INFO  | Task 167fa523-ffb9-4593-b2ec-ff6b8fc9cc52 is in state STARTED 2025-01-16 15:06:29.420550 | orchestrator | 2025-01-16 15:06:26 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:06:29.420709 | orchestrator | 2025-01-16 15:06:29 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:06:32.438091 | orchestrator | 2025-01-16 15:06:29 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:06:32.438209 | orchestrator | 2025-01-16 15:06:29 | INFO  | Task 167fa523-ffb9-4593-b2ec-ff6b8fc9cc52 is in state STARTED 2025-01-16 15:06:32.438224 | orchestrator | 2025-01-16 15:06:29 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:06:32.438251 | orchestrator | 2025-01-16 15:06:32 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:06:35.462307 | orchestrator | 2025-01-16 15:06:32 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:06:35.462464 | orchestrator | 2025-01-16 15:06:32 | INFO  | Task 167fa523-ffb9-4593-b2ec-ff6b8fc9cc52 is in state STARTED 2025-01-16 15:06:35.462486 | orchestrator | 2025-01-16 15:06:32 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:06:35.462580 | orchestrator | 2025-01-16 15:06:35 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:06:35.462798 | orchestrator | 2025-01-16 15:06:35 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:06:35.463188 | orchestrator | 2025-01-16 15:06:35 | INFO  | Task 167fa523-ffb9-4593-b2ec-ff6b8fc9cc52 is in state STARTED 2025-01-16 15:06:38.489960 | orchestrator | 2025-01-16 15:06:35 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:06:38.490158 | orchestrator | 2025-01-16 15:06:38 | INFO  | Task 
7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:06:41.511182 | orchestrator | 2025-01-16 15:06:38 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:06:41.511365 | orchestrator | 2025-01-16 15:06:38 | INFO  | Task 167fa523-ffb9-4593-b2ec-ff6b8fc9cc52 is in state STARTED 2025-01-16 15:06:41.511401 | orchestrator | 2025-01-16 15:06:38 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:06:41.511448 | orchestrator | 2025-01-16 15:06:41 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:06:44.534750 | orchestrator | 2025-01-16 15:06:41 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:06:44.534857 | orchestrator | 2025-01-16 15:06:41 | INFO  | Task 167fa523-ffb9-4593-b2ec-ff6b8fc9cc52 is in state STARTED 2025-01-16 15:06:44.534870 | orchestrator | 2025-01-16 15:06:41 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:06:44.534893 | orchestrator | 2025-01-16 15:06:44 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:06:44.535730 | orchestrator | 2025-01-16 15:06:44 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:06:44.535755 | orchestrator | 2025-01-16 15:06:44 | INFO  | Task 167fa523-ffb9-4593-b2ec-ff6b8fc9cc52 is in state STARTED 2025-01-16 15:06:47.555073 | orchestrator | 2025-01-16 15:06:44 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:06:47.555179 | orchestrator | 2025-01-16 15:06:47 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:06:47.555430 | orchestrator | 2025-01-16 15:06:47 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:06:47.555451 | orchestrator | 2025-01-16 15:06:47 | INFO  | Task 167fa523-ffb9-4593-b2ec-ff6b8fc9cc52 is in state STARTED 2025-01-16 15:06:50.573775 | orchestrator | 2025-01-16 15:06:47 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:06:50.574715 | orchestrator | 2025-01-16 15:06:50 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:06:53.599086 | orchestrator | 2025-01-16 15:06:50 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:06:53.599206 | orchestrator | 2025-01-16 15:06:50 | INFO  | Task 167fa523-ffb9-4593-b2ec-ff6b8fc9cc52 is in state STARTED 2025-01-16 15:06:53.599226 | orchestrator | 2025-01-16 15:06:50 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:06:53.599261 | orchestrator | 2025-01-16 15:06:53 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:06:53.601224 | orchestrator | 2025-01-16 15:06:53 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:06:53.601285 | orchestrator | 2025-01-16 15:06:53.601301 | orchestrator | 2025-01-16 15:06:53.601315 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-01-16 15:06:53.601330 | orchestrator | 2025-01-16 15:06:53.601345 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-01-16 15:06:53.601376 | orchestrator | Thursday 16 January 2025 15:05:36 +0000 (0:00:00.196) 0:00:00.196 ****** 2025-01-16 15:06:53.601391 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:06:53.601406 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:06:53.601421 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:06:53.601435 | orchestrator | 2025-01-16 
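This grouping play buckets hosts with Ansible's group_by module so that the service plays which follow can target dynamic groups; the enable_opensearch_True item in the next task is such a key built from the service flag. A sketch of the pattern, with key formats assumed from the task names and item labels in this log:

  - name: Group hosts based on Kolla action (sketch)
    ansible.builtin.group_by:
      key: "kolla_action_{{ kolla_action }}"        # e.g. kolla_action_deploy

  - name: Group hosts based on enabled services (sketch)
    ansible.builtin.group_by:
      key: "{{ item }}"
    loop:
      - "enable_opensearch_{{ enable_opensearch | bool }}"   # yields enable_opensearch_True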
15:06:53.601449 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-01-16 15:06:53.601463 | orchestrator | Thursday 16 January 2025 15:05:36 +0000 (0:00:00.225) 0:00:00.421 ****** 2025-01-16 15:06:53.601478 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-01-16 15:06:53.601492 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-01-16 15:06:53.601507 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-01-16 15:06:53.601521 | orchestrator | 2025-01-16 15:06:53.601535 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-01-16 15:06:53.601549 | orchestrator | 2025-01-16 15:06:53.601563 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-01-16 15:06:53.601641 | orchestrator | Thursday 16 January 2025 15:05:37 +0000 (0:00:00.191) 0:00:00.613 ****** 2025-01-16 15:06:53.601658 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:06:53.601672 | orchestrator | 2025-01-16 15:06:53.601686 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-01-16 15:06:53.601700 | orchestrator | Thursday 16 January 2025 15:05:37 +0000 (0:00:00.422) 0:00:01.035 ****** 2025-01-16 15:06:53.601714 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-01-16 15:06:53.601728 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-01-16 15:06:53.601742 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-01-16 15:06:53.601756 | orchestrator | 2025-01-16 15:06:53.601771 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-01-16 15:06:53.601785 | orchestrator | Thursday 16 January 2025 15:05:38 +0000 (0:00:00.525) 0:00:01.561 ****** 2025-01-16 15:06:53.601803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-01-16 15:06:53.601851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-01-16 15:06:53.601880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-01-16 15:06:53.601898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-01-16 15:06:53.601917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-01-16 15:06:53.601941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': 
True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-01-16 15:06:53.601957 | orchestrator | 2025-01-16 15:06:53.601973 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-01-16 15:06:53.601988 | orchestrator | Thursday 16 January 2025 15:05:39 +0000 (0:00:01.061) 0:00:02.623 ****** 2025-01-16 15:06:53.602004 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:06:53.602075 | orchestrator | 2025-01-16 15:06:53.602094 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-01-16 15:06:53.602109 | orchestrator | Thursday 16 January 2025 15:05:39 +0000 (0:00:00.486) 0:00:03.109 ****** 2025-01-16 15:06:53.602134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-01-16 15:06:53.602151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-01-16 15:06:53.602166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-01-16 15:06:53.602191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-01-16 15:06:53.602215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-01-16 15:06:53.602231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-01-16 15:06:53.602246 | orchestrator | 2025-01-16 15:06:53.602260 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-01-16 15:06:53.602275 | orchestrator | Thursday 16 January 2025 15:05:41 +0000 (0:00:02.322) 0:00:05.431 ****** 2025-01-16 15:06:53.602290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-01-16 15:06:53.602312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-01-16 15:06:53.602328 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:06:53.602349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-01-16 15:06:53.602365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-01-16 15:06:53.602380 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:06:53.602395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-01-16 15:06:53.602417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-01-16 15:06:53.602433 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:06:53.602447 | orchestrator | 2025-01-16 15:06:53.602461 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-01-16 15:06:53.602476 | orchestrator | Thursday 16 January 2025 15:05:42 
+0000 (0:00:00.585) 0:00:06.016 ****** 2025-01-16 15:06:53.602496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-01-16 15:06:53.602512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-01-16 15:06:53.602527 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:06:53.602541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-01-16 15:06:53.602563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-01-16 15:06:53.602596 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:06:53.602611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-01-16 15:06:53.602636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-01-16 15:06:53.602652 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:06:53.602666 | orchestrator | 2025-01-16 15:06:53.602681 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-01-16 15:06:53.602701 | orchestrator | Thursday 16 January 2025 15:05:43 +0000 (0:00:00.931) 0:00:06.948 ****** 2025-01-16 15:06:53.602716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-01-16 15:06:53.602732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-01-16 15:06:53.602747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-01-16 15:06:53.602776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-01-16 15:06:53.602793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-01-16 15:06:53.602817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-01-16 15:06:53.602832 | orchestrator | 2025-01-16 15:06:53.602847 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-01-16 15:06:53.602861 | orchestrator | Thursday 16 January 2025 15:05:45 +0000 (0:00:01.719) 0:00:08.667 ****** 2025-01-16 15:06:53.602875 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:06:53.602889 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:06:53.602903 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:06:53.602917 | orchestrator | 2025-01-16 15:06:53.602932 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-01-16 15:06:53.602946 | orchestrator | Thursday 16 January 2025 15:05:47 +0000 (0:00:02.490) 0:00:11.158 ****** 2025-01-16 15:06:53.602960 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:06:53.602974 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:06:53.602988 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:06:53.603002 | orchestrator | 2025-01-16 15:06:53.603016 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-01-16 15:06:53.603030 | orchestrator | Thursday 16 January 2025 15:05:48 +0000 (0:00:01.298) 0:00:12.456 ****** 2025-01-16 15:06:53.603051 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-01-16 15:06:53.603067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-01-16 15:06:53.603090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-01-16 15:06:53.603105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-01-16 15:06:53.603121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-01-16 15:06:53.603144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-01-16 15:06:53.603166 | orchestrator | 2025-01-16 15:06:53.603180 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-01-16 15:06:53.603195 | orchestrator | Thursday 16 January 2025 15:05:50 +0000 (0:00:01.881) 0:00:14.338 ****** 2025-01-16 15:06:53.603209 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:06:53.603223 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:06:53.603237 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:06:53.603251 | orchestrator | 2025-01-16 15:06:53.603265 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-01-16 15:06:53.603279 | orchestrator | Thursday 16 January 2025 15:05:51 +0000 (0:00:00.402) 0:00:14.741 ****** 2025-01-16 15:06:53.603293 | orchestrator | 2025-01-16 15:06:53.603307 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-01-16 15:06:53.603322 | orchestrator | Thursday 16 January 2025 15:05:51 +0000 (0:00:00.255) 0:00:14.996 ****** 2025-01-16 15:06:53.603336 | orchestrator | 2025-01-16 15:06:53.603350 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-01-16 15:06:53.603363 | orchestrator | Thursday 16 January 2025 15:05:51 +0000 (0:00:00.091) 0:00:15.088 ****** 2025-01-16 15:06:53.603377 | orchestrator | 2025-01-16 15:06:53.603391 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-01-16 15:06:53.603405 | orchestrator | Thursday 16 January 2025 15:05:51 +0000 (0:00:00.044) 0:00:15.132 ****** 2025-01-16 15:06:53.603418 | orchestrator | skipping: [testbed-node-0] 2025-01-16 
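Every container definition dumped above carries a healthcheck dict (interval, retries, start_period, test, timeout) built around the healthcheck_curl wrapper, which becomes the Docker healthcheck of the deployed container. kolla-ansible applies it through its own container module; purely as an illustration of how such a dict maps onto a Docker healthcheck, the equivalent with community.docker.docker_container would look roughly like:

  - name: Run opensearch with the generated healthcheck (illustration only, not the kolla module)
    community.docker.docker_container:
      name: opensearch
      image: nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1
      state: started
      healthcheck:
        test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9200"]
        interval: 30s
        timeout: 30s
        retries: 3
        start_period: 5s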
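The "Disable shard allocation" and "Perform a flush" handlers are skipped in this run, presumably because this is a fresh deployment with no running cluster to quiesce; on a restart of an existing cluster they would pause shard movement and flush indices before the opensearch container is bounced. A hedged sketch of the quiesce step against the cluster settings API, with the endpoint variable assumed:

  - name: Disable shard allocation before restarting OpenSearch (sketch)
    ansible.builtin.uri:
      url: "{{ opensearch_internal_endpoint }}/_cluster/settings"   # assumed variable for the internal endpoint, e.g. http://<vip>:9200
      method: PUT
      body_format: json
      body:
        transient:
          cluster.routing.allocation.enable: "none"
      status_code: 200
    run_once: true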
15:06:53.603432 | orchestrator | 2025-01-16 15:06:53.603446 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-01-16 15:06:53.603460 | orchestrator | Thursday 16 January 2025 15:05:51 +0000 (0:00:00.143) 0:00:15.276 ****** 2025-01-16 15:06:53.603474 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:06:53.603488 | orchestrator | 2025-01-16 15:06:53.603502 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-01-16 15:06:53.603516 | orchestrator | Thursday 16 January 2025 15:05:52 +0000 (0:00:00.558) 0:00:15.835 ****** 2025-01-16 15:06:53.603530 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:06:53.603544 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:06:53.603558 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:06:53.603572 | orchestrator | 2025-01-16 15:06:53.603605 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-01-16 15:06:53.603620 | orchestrator | Thursday 16 January 2025 15:06:11 +0000 (0:00:18.711) 0:00:34.546 ****** 2025-01-16 15:06:53.603634 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:06:53.603648 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:06:53.603668 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:06:53.603682 | orchestrator | 2025-01-16 15:06:53.603696 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-01-16 15:06:53.603710 | orchestrator | Thursday 16 January 2025 15:06:42 +0000 (0:00:31.806) 0:01:06.353 ****** 2025-01-16 15:06:53.603724 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:06:53.603738 | orchestrator | 2025-01-16 15:06:53.603752 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-01-16 15:06:53.603766 | orchestrator | Thursday 16 January 2025 15:06:43 +0000 (0:00:00.708) 0:01:07.061 ****** 2025-01-16 15:06:53.603787 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:06:53.603801 | orchestrator | 2025-01-16 15:06:53.603815 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-01-16 15:06:53.603829 | orchestrator | Thursday 16 January 2025 15:06:45 +0000 (0:00:01.699) 0:01:08.760 ****** 2025-01-16 15:06:53.603843 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:06:53.603857 | orchestrator | 2025-01-16 15:06:53.603871 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-01-16 15:06:53.603885 | orchestrator | Thursday 16 January 2025 15:06:46 +0000 (0:00:01.577) 0:01:10.337 ****** 2025-01-16 15:06:53.603899 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:06:53.603913 | orchestrator | 2025-01-16 15:06:53.603926 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-01-16 15:06:53.603940 | orchestrator | Thursday 16 January 2025 15:06:48 +0000 (0:00:01.868) 0:01:12.206 ****** 2025-01-16 15:06:53.603954 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:06:53.603968 | orchestrator | 2025-01-16 15:06:53.603982 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 15:06:53.603997 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-01-16 15:06:53.604017 | orchestrator | testbed-node-1 : 
ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-01-16 15:06:56.626785 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-01-16 15:06:56.627021 | orchestrator | 2025-01-16 15:06:56.627046 | orchestrator | 2025-01-16 15:06:56.627063 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 15:06:56.627079 | orchestrator | Thursday 16 January 2025 15:06:50 +0000 (0:00:02.004) 0:01:14.211 ****** 2025-01-16 15:06:56.627093 | orchestrator | =============================================================================== 2025-01-16 15:06:56.627108 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 31.81s 2025-01-16 15:06:56.627122 | orchestrator | opensearch : Restart opensearch container ------------------------------ 18.71s 2025-01-16 15:06:56.627136 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.49s 2025-01-16 15:06:56.627150 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.32s 2025-01-16 15:06:56.627193 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.00s 2025-01-16 15:06:56.627208 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.88s 2025-01-16 15:06:56.627223 | orchestrator | opensearch : Create new log retention policy ---------------------------- 1.87s 2025-01-16 15:06:56.627237 | orchestrator | opensearch : Copying over config.json files for services ---------------- 1.72s 2025-01-16 15:06:56.627251 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 1.70s 2025-01-16 15:06:56.627266 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 1.58s 2025-01-16 15:06:56.627280 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.30s 2025-01-16 15:06:56.627294 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.06s 2025-01-16 15:06:56.627308 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.93s 2025-01-16 15:06:56.627324 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.71s 2025-01-16 15:06:56.627341 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.59s 2025-01-16 15:06:56.627358 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.56s 2025-01-16 15:06:56.627374 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.53s 2025-01-16 15:06:56.627389 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.49s 2025-01-16 15:06:56.627433 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.42s 2025-01-16 15:06:56.627449 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.40s 2025-01-16 15:06:56.627465 | orchestrator | 2025-01-16 15:06:53 | INFO  | Task 167fa523-ffb9-4593-b2ec-ff6b8fc9cc52 is in state SUCCESS 2025-01-16 15:06:56.627482 | orchestrator | 2025-01-16 15:06:53 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:06:56.627517 | orchestrator | 2025-01-16 15:06:56 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 
15:06:59.644028 | orchestrator | 2025-01-16 15:06:56 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:06:59.644111 | orchestrator | 2025-01-16 15:06:56 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:06:59.644130 | orchestrator | 2025-01-16 15:06:59 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:07:02.667689 | orchestrator | 2025-01-16 15:06:59 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:07:02.667812 | orchestrator | 2025-01-16 15:06:59 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:07:02.667847 | orchestrator | 2025-01-16 15:07:02 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:07:05.687258 | orchestrator | 2025-01-16 15:07:02 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:07:05.687357 | orchestrator | 2025-01-16 15:07:02 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:07:05.687385 | orchestrator | 2025-01-16 15:07:05 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:07:08.704930 | orchestrator | 2025-01-16 15:07:05 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:07:08.705016 | orchestrator | 2025-01-16 15:07:05 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:07:08.705036 | orchestrator | 2025-01-16 15:07:08 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:07:11.722473 | orchestrator | 2025-01-16 15:07:08 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:07:11.722605 | orchestrator | 2025-01-16 15:07:08 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:07:11.722637 | orchestrator | 2025-01-16 15:07:11 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:07:14.739048 | orchestrator | 2025-01-16 15:07:11 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:07:14.739185 | orchestrator | 2025-01-16 15:07:11 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:07:14.739236 | orchestrator | 2025-01-16 15:07:14 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:07:17.755549 | orchestrator | 2025-01-16 15:07:14 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:07:17.755738 | orchestrator | 2025-01-16 15:07:14 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:07:17.755779 | orchestrator | 2025-01-16 15:07:17 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:07:20.773732 | orchestrator | 2025-01-16 15:07:17 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:07:20.773876 | orchestrator | 2025-01-16 15:07:17 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:07:20.773917 | orchestrator | 2025-01-16 15:07:20 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:07:23.798771 | orchestrator | 2025-01-16 15:07:20 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:07:23.798883 | orchestrator | 2025-01-16 15:07:20 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:07:23.798904 | orchestrator | 2025-01-16 15:07:23 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:07:26.825903 | orchestrator | 2025-01-16 15:07:23 | INFO  | Task 
437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:07:26.826067 | orchestrator | 2025-01-16 15:07:23 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:07:26.826100 | orchestrator | 2025-01-16 15:07:26 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:07:29.844382 | orchestrator | 2025-01-16 15:07:26 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:07:29.844496 | orchestrator | 2025-01-16 15:07:26 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:07:29.844528 | orchestrator | 2025-01-16 15:07:29 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:07:32.866489 | orchestrator | 2025-01-16 15:07:29 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:07:32.866674 | orchestrator | 2025-01-16 15:07:29 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:07:32.866716 | orchestrator | 2025-01-16 15:07:32 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:07:35.890461 | orchestrator | 2025-01-16 15:07:32 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:07:35.890545 | orchestrator | 2025-01-16 15:07:32 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:07:35.890595 | orchestrator | 2025-01-16 15:07:35 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:07:35.890938 | orchestrator | 2025-01-16 15:07:35 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:07:38.910124 | orchestrator | 2025-01-16 15:07:35 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:07:38.910275 | orchestrator | 2025-01-16 15:07:38 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:07:41.929141 | orchestrator | 2025-01-16 15:07:38 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:07:41.929228 | orchestrator | 2025-01-16 15:07:38 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:07:41.929247 | orchestrator | 2025-01-16 15:07:41 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:07:44.948506 | orchestrator | 2025-01-16 15:07:41 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:07:44.948626 | orchestrator | 2025-01-16 15:07:41 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:07:44.948647 | orchestrator | 2025-01-16 15:07:44 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:07:44.948702 | orchestrator | 2025-01-16 15:07:44 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:07:47.968462 | orchestrator | 2025-01-16 15:07:44 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:07:47.968609 | orchestrator | 2025-01-16 15:07:47 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:07:50.988360 | orchestrator | 2025-01-16 15:07:47 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:07:50.988543 | orchestrator | 2025-01-16 15:07:47 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:07:50.988668 | orchestrator | 2025-01-16 15:07:50 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:07:50.988789 | orchestrator | 2025-01-16 15:07:50 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:07:54.012014 | orchestrator 
| 2025-01-16 15:07:50 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:07:54.012160 | orchestrator | 2025-01-16 15:07:54 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:07:54.012800 | orchestrator | 2025-01-16 15:07:54 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:07:57.033843 | orchestrator | 2025-01-16 15:07:54 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:07:57.033995 | orchestrator | 2025-01-16 15:07:57 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:08:00.049161 | orchestrator | 2025-01-16 15:07:57 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:08:00.049241 | orchestrator | 2025-01-16 15:07:57 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:08:00.049259 | orchestrator | 2025-01-16 15:08:00 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:08:03.074731 | orchestrator | 2025-01-16 15:08:00 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:08:03.074905 | orchestrator | 2025-01-16 15:08:00 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:08:03.074947 | orchestrator | 2025-01-16 15:08:03 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:08:06.095910 | orchestrator | 2025-01-16 15:08:03 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:08:06.096034 | orchestrator | 2025-01-16 15:08:03 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:08:06.096061 | orchestrator | 2025-01-16 15:08:06 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:08:09.118294 | orchestrator | 2025-01-16 15:08:06 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:08:09.118419 | orchestrator | 2025-01-16 15:08:06 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:08:09.118452 | orchestrator | 2025-01-16 15:08:09 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:08:12.144005 | orchestrator | 2025-01-16 15:08:09 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state STARTED 2025-01-16 15:08:12.144116 | orchestrator | 2025-01-16 15:08:09 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:08:12.144137 | orchestrator | 2025-01-16 15:08:12 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state STARTED 2025-01-16 15:08:12.147002 | orchestrator | 2025-01-16 15:08:12.147116 | orchestrator | 2025-01-16 15:08:12 | INFO  | Task 437279b8-bd79-45a1-a35c-3dc9f2bc56dd is in state SUCCESS 2025-01-16 15:08:12.147190 | orchestrator | 2025-01-16 15:08:12.147215 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-01-16 15:08:12.147238 | orchestrator | 2025-01-16 15:08:12.147258 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-01-16 15:08:12.147279 | orchestrator | Thursday 16 January 2025 15:05:36 +0000 (0:00:00.092) 0:00:00.092 ****** 2025-01-16 15:08:12.147301 | orchestrator | ok: [localhost] => { 2025-01-16 15:08:12.147322 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 
2025-01-16 15:08:12.147342 | orchestrator | } 2025-01-16 15:08:12.147362 | orchestrator | 2025-01-16 15:08:12.147382 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-01-16 15:08:12.147403 | orchestrator | Thursday 16 January 2025 15:05:36 +0000 (0:00:00.024) 0:00:00.116 ****** 2025-01-16 15:08:12.147455 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 1, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-01-16 15:08:12.147477 | orchestrator | ...ignoring 2025-01-16 15:08:12.147497 | orchestrator | 2025-01-16 15:08:12.147519 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-01-16 15:08:12.147539 | orchestrator | Thursday 16 January 2025 15:05:37 +0000 (0:00:01.329) 0:00:01.446 ****** 2025-01-16 15:08:12.147643 | orchestrator | skipping: [localhost] 2025-01-16 15:08:12.147664 | orchestrator | 2025-01-16 15:08:12.147683 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-01-16 15:08:12.147704 | orchestrator | Thursday 16 January 2025 15:05:38 +0000 (0:00:00.027) 0:00:01.473 ****** 2025-01-16 15:08:12.147723 | orchestrator | ok: [localhost] 2025-01-16 15:08:12.147739 | orchestrator | 2025-01-16 15:08:12.147752 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-01-16 15:08:12.147764 | orchestrator | 2025-01-16 15:08:12.147777 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-01-16 15:08:12.147790 | orchestrator | Thursday 16 January 2025 15:05:38 +0000 (0:00:00.089) 0:00:01.562 ****** 2025-01-16 15:08:12.147802 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:12.147815 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:12.147828 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:12.147917 | orchestrator | 2025-01-16 15:08:12.147944 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-01-16 15:08:12.147964 | orchestrator | Thursday 16 January 2025 15:05:38 +0000 (0:00:00.362) 0:00:01.925 ****** 2025-01-16 15:08:12.147983 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-01-16 15:08:12.147995 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-01-16 15:08:12.148007 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-01-16 15:08:12.148018 | orchestrator | 2025-01-16 15:08:12.148030 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-01-16 15:08:12.148048 | orchestrator | 2025-01-16 15:08:12.148067 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-01-16 15:08:12.148102 | orchestrator | Thursday 16 January 2025 15:05:38 +0000 (0:00:00.327) 0:00:02.252 ****** 2025-01-16 15:08:12.148121 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-01-16 15:08:12.148132 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-01-16 15:08:12.148144 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-01-16 15:08:12.148155 | orchestrator | 2025-01-16 15:08:12.148166 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-01-16 15:08:12.148177 | orchestrator | Thursday 16 January 2025 15:05:39 +0000 (0:00:00.399) 0:00:02.652 ****** 2025-01-16 15:08:12.148189 | orchestrator | 
included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:08:12.148201 | orchestrator | 2025-01-16 15:08:12.148212 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-01-16 15:08:12.148224 | orchestrator | Thursday 16 January 2025 15:05:39 +0000 (0:00:00.390) 0:00:03.042 ****** 2025-01-16 15:08:12.148253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-01-16 15:08:12.148282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': 
{'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-01-16 15:08:12.148296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-01-16 15:08:12.148310 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-01-16 15:08:12.148330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 
fall 5 backup', '']}}}}) 2025-01-16 15:08:12.148349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-01-16 15:08:12.148361 | orchestrator | 2025-01-16 15:08:12.148373 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-01-16 15:08:12.148384 | orchestrator | Thursday 16 January 2025 15:05:42 +0000 (0:00:02.969) 0:00:06.012 ****** 2025-01-16 15:08:12.148396 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:12.148412 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:12.148431 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:08:12.148450 | orchestrator | 2025-01-16 15:08:12.148468 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-01-16 15:08:12.148487 | orchestrator | Thursday 16 January 2025 15:05:43 +0000 (0:00:00.563) 0:00:06.575 ****** 2025-01-16 15:08:12.148507 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:12.148526 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:12.148545 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:08:12.148595 | orchestrator | 2025-01-16 15:08:12.148608 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-01-16 15:08:12.148619 | orchestrator | Thursday 16 January 2025 15:05:44 +0000 (0:00:01.000) 0:00:07.576 ****** 2025-01-16 15:08:12.148639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 
fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-01-16 15:08:12.148660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-01-16 15:08:12.148673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 
'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-01-16 15:08:12.148699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-01-16 15:08:12.148712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-01-16 15:08:12.148725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-01-16 15:08:12.148736 | orchestrator | 2025-01-16 15:08:12.148748 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-01-16 15:08:12.148759 | orchestrator | Thursday 16 January 2025 15:05:48 +0000 (0:00:03.891) 0:00:11.467 ****** 2025-01-16 15:08:12.148771 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:12.148782 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:12.148794 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:08:12.148805 | orchestrator | 2025-01-16 15:08:12.148816 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-01-16 15:08:12.148827 | orchestrator | Thursday 16 January 2025 15:05:48 +0000 (0:00:00.755) 0:00:12.222 ****** 2025-01-16 15:08:12.148839 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:08:12.148850 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:08:12.148861 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:08:12.148872 | orchestrator | 2025-01-16 15:08:12.148892 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-01-16 15:08:12.148911 | orchestrator | Thursday 16 January 2025 15:05:54 +0000 (0:00:05.908) 0:00:18.131 ****** 2025-01-16 15:08:12.148930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-01-16 15:08:12.148971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 
backup', '']}}}}) 2025-01-16 15:08:12.148996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-01-16 15:08:12.149027 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-01-16 15:08:12.149051 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-01-16 15:08:12.149064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-01-16 15:08:12.149075 | orchestrator | 2025-01-16 15:08:12.149087 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-01-16 15:08:12.149098 | orchestrator | Thursday 16 January 2025 15:05:57 +0000 (0:00:03.300) 0:00:21.432 ****** 2025-01-16 15:08:12.149109 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:08:12.149121 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:08:12.149132 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:08:12.149143 | orchestrator | 2025-01-16 15:08:12.149154 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-01-16 15:08:12.149166 | orchestrator | Thursday 16 January 2025 15:05:58 +0000 (0:00:00.783) 0:00:22.215 ****** 2025-01-16 15:08:12.149177 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:12.149188 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:12.149200 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:12.149211 | orchestrator | 2025-01-16 15:08:12.149223 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-01-16 15:08:12.149234 | orchestrator | Thursday 16 January 2025 15:05:59 +0000 (0:00:00.267) 0:00:22.482 ****** 2025-01-16 15:08:12.149251 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:12.149263 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:12.149274 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:12.149285 | orchestrator | 2025-01-16 15:08:12.149297 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-01-16 15:08:12.149308 | orchestrator | Thursday 16 January 2025 15:05:59 +0000 (0:00:00.196) 0:00:22.679 ****** 2025-01-16 15:08:12.149320 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-01-16 15:08:12.149332 | orchestrator | ...ignoring 2025-01-16 15:08:12.149343 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-01-16 15:08:12.149355 | orchestrator | ...ignoring 2025-01-16 15:08:12.149366 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-01-16 15:08:12.149378 | orchestrator | ...ignoring 2025-01-16 15:08:12.149389 | orchestrator | 2025-01-16 15:08:12.149400 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-01-16 15:08:12.149412 | orchestrator | Thursday 16 January 2025 15:06:09 +0000 (0:00:10.654) 0:00:33.334 ****** 2025-01-16 15:08:12.149423 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:12.149434 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:12.149445 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:12.149456 | orchestrator | 2025-01-16 15:08:12.149467 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-01-16 15:08:12.149478 | orchestrator | Thursday 16 January 2025 15:06:10 +0000 (0:00:00.371) 0:00:33.705 ****** 2025-01-16 15:08:12.149490 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:12.149501 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:12.149512 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:12.149525 | orchestrator | 2025-01-16 15:08:12.149545 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-01-16 15:08:12.149589 | orchestrator | Thursday 16 January 2025 15:06:10 +0000 (0:00:00.393) 0:00:34.099 ****** 2025-01-16 15:08:12.149608 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:12.149627 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:12.149645 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:12.149662 | orchestrator | 2025-01-16 15:08:12.149681 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-01-16 15:08:12.149709 | orchestrator | Thursday 16 January 2025 15:06:10 +0000 (0:00:00.273) 0:00:34.372 ****** 2025-01-16 15:08:12.149728 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:12.149748 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:12.149763 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:12.149782 | orchestrator | 2025-01-16 15:08:12.149801 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-01-16 15:08:12.149820 | orchestrator | Thursday 16 January 2025 15:06:11 +0000 (0:00:00.387) 0:00:34.760 ****** 2025-01-16 15:08:12.149839 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:12.149859 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:12.149878 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:12.149896 | orchestrator | 2025-01-16 15:08:12.149908 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-01-16 15:08:12.149919 | orchestrator | Thursday 16 January 2025 15:06:11 +0000 (0:00:00.606) 0:00:35.366 ****** 2025-01-16 15:08:12.149931 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:12.149961 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:12.149981 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:12.150000 | orchestrator | 2025-01-16 15:08:12.150075 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-01-16 15:08:12.150091 | orchestrator | Thursday 16 January 2025 15:06:12 +0000 (0:00:00.572) 0:00:35.938 ****** 2025-01-16 15:08:12.150111 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:12.150123 | orchestrator | skipping: 
[testbed-node-2] 2025-01-16 15:08:12.150135 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-01-16 15:08:12.150147 | orchestrator | 2025-01-16 15:08:12.150159 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-01-16 15:08:12.150170 | orchestrator | Thursday 16 January 2025 15:06:12 +0000 (0:00:00.427) 0:00:36.366 ****** 2025-01-16 15:08:12.150181 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:08:12.150201 | orchestrator | 2025-01-16 15:08:12.150213 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-01-16 15:08:12.150225 | orchestrator | Thursday 16 January 2025 15:06:21 +0000 (0:00:08.181) 0:00:44.547 ****** 2025-01-16 15:08:12.150237 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:12.150249 | orchestrator | 2025-01-16 15:08:12.150260 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-01-16 15:08:12.150271 | orchestrator | Thursday 16 January 2025 15:06:21 +0000 (0:00:00.078) 0:00:44.626 ****** 2025-01-16 15:08:12.150283 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:12.150295 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:12.150306 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:12.150317 | orchestrator | 2025-01-16 15:08:12.150328 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-01-16 15:08:12.150340 | orchestrator | Thursday 16 January 2025 15:06:21 +0000 (0:00:00.678) 0:00:45.304 ****** 2025-01-16 15:08:12.150351 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:08:12.150363 | orchestrator | 2025-01-16 15:08:12.150374 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-01-16 15:08:12.150385 | orchestrator | Thursday 16 January 2025 15:06:27 +0000 (0:00:05.785) 0:00:51.090 ****** 2025-01-16 15:08:12.150396 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for first MariaDB service port liveness (10 retries left). 
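The entries above — "Running MariaDB bootstrap container", "Starting first MariaDB container" and the retried "Wait for first MariaDB service port liveness" — show the usual Galera bring-up order: bootstrap a single node, then poll its port until the server greeting appears. The earlier "Timeout when waiting for search string MariaDB in <ip>:3306" failures are the standard message of an ansible.builtin.wait_for probe with a search_regex, and "FAILED - RETRYING ... (10 retries left)" is the output of a task-level until/retries loop. A minimal sketch of such a liveness probe follows; it is illustrative only, not the kolla-ansible source, and the address, timeout and retry budget are assumptions taken from the values visible in this log.

# Sketch of a Galera port-liveness probe (illustrative, not the kolla-ansible task).
- name: Wait for first MariaDB service port liveness
  ansible.builtin.wait_for:
    host: 192.168.16.10          # assumption: api_interface address of the bootstrap node
    port: 3306
    connect_timeout: 1
    timeout: 10                  # the earlier per-node failures report "elapsed": 10
    search_regex: MariaDB        # wait for the MariaDB greeting on the socket
  register: mariadb_port_check
  until: mariadb_port_check is success
  retries: 10                    # matches the "(10 retries left)" budget above
  delay: 6

The single consumed retry above simply means the port was not yet answering on the first poll; the handler then reports ok once the bootstrap container starts accepting connections.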
2025-01-16 15:08:12.150408 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:12.150419 | orchestrator | 2025-01-16 15:08:12.150431 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-01-16 15:08:12.150442 | orchestrator | Thursday 16 January 2025 15:06:34 +0000 (0:00:06.692) 0:00:57.783 ****** 2025-01-16 15:08:12.150453 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:12.150464 | orchestrator | 2025-01-16 15:08:12.150476 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-01-16 15:08:12.150487 | orchestrator | Thursday 16 January 2025 15:06:36 +0000 (0:00:01.693) 0:00:59.476 ****** 2025-01-16 15:08:12.150498 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:08:12.150510 | orchestrator | 2025-01-16 15:08:12.150521 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-01-16 15:08:12.150533 | orchestrator | Thursday 16 January 2025 15:06:36 +0000 (0:00:00.074) 0:00:59.551 ****** 2025-01-16 15:08:12.150544 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:12.150578 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:12.150598 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:12.150617 | orchestrator | 2025-01-16 15:08:12.150635 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-01-16 15:08:12.150650 | orchestrator | Thursday 16 January 2025 15:06:36 +0000 (0:00:00.267) 0:00:59.818 ****** 2025-01-16 15:08:12.150662 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:12.150673 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:08:12.150685 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:08:12.150696 | orchestrator | 2025-01-16 15:08:12.150707 | orchestrator | RUNNING HANDLER [mariadb : Restart mariadb-clustercheck container] ************* 2025-01-16 15:08:12.150718 | orchestrator | Thursday 16 January 2025 15:06:36 +0000 (0:00:00.276) 0:01:00.095 ****** 2025-01-16 15:08:12.150729 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-01-16 15:08:12.150740 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:08:12.150751 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:08:12.150770 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:08:12.150781 | orchestrator | 2025-01-16 15:08:12.150793 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-01-16 15:08:12.150804 | orchestrator | skipping: no hosts matched 2025-01-16 15:08:12.150816 | orchestrator | 2025-01-16 15:08:12.150827 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-01-16 15:08:12.150838 | orchestrator | 2025-01-16 15:08:12.150849 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-01-16 15:08:12.150878 | orchestrator | Thursday 16 January 2025 15:06:50 +0000 (0:00:14.052) 0:01:14.148 ****** 2025-01-16 15:08:12.150898 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:08:12.150916 | orchestrator | 2025-01-16 15:08:12.150935 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-01-16 15:08:12.150955 | orchestrator | Thursday 16 January 2025 15:07:04 +0000 (0:00:14.202) 0:01:28.351 ****** 2025-01-16 15:08:12.150974 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:12.150992 | orchestrator | 
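Once a restarted member's port is live, the play also waits for it to report a synced Galera state before moving on ("Wait for MariaDB service to sync WSREP" above and in the plays that follow). In Galera terms that means polling the wsrep_local_state_comment status variable until it reads "Synced". The sketch below shows one way to express such a check; it is illustrative only, not the kolla-ansible implementation, and the container name, client invocation and credential variable are assumptions.

# Sketch of a WSREP sync check (illustrative, not the kolla-ansible task).
- name: Wait for MariaDB service to sync WSREP
  ansible.builtin.command: >
    docker exec mariadb mysql --silent --skip-column-names
    -uroot -p{{ database_password }}
    -e "SHOW STATUS LIKE 'wsrep_local_state_comment'"
  register: wsrep_state
  until: "'Synced' in wsrep_state.stdout"
  retries: 10
  delay: 6
  changed_when: false
  no_log: true                   # keep the root password out of the job log

The serial "Restart MariaDB container" / "Wait for MariaDB service port liveness" / "Wait for MariaDB service to sync WSREP" groups that follow for testbed-node-1, testbed-node-2 and finally the bootstrap host apply this pattern one member at a time.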
2025-01-16 15:08:12.151005 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-01-16 15:08:12.151016 | orchestrator | Thursday 16 January 2025 15:07:18 +0000 (0:00:13.354) 0:01:41.705 ****** 2025-01-16 15:08:12.151027 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:12.151038 | orchestrator | 2025-01-16 15:08:12.151050 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-01-16 15:08:12.151061 | orchestrator | 2025-01-16 15:08:12.151073 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-01-16 15:08:12.151084 | orchestrator | Thursday 16 January 2025 15:07:19 +0000 (0:00:01.553) 0:01:43.259 ****** 2025-01-16 15:08:12.151095 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:08:12.151106 | orchestrator | 2025-01-16 15:08:12.151123 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-01-16 15:08:12.151151 | orchestrator | Thursday 16 January 2025 15:07:34 +0000 (0:00:14.431) 0:01:57.690 ****** 2025-01-16 15:08:12.151172 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:12.151189 | orchestrator | 2025-01-16 15:08:12.151208 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-01-16 15:08:12.151220 | orchestrator | Thursday 16 January 2025 15:07:47 +0000 (0:00:13.337) 0:02:11.028 ****** 2025-01-16 15:08:12.151232 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:12.151243 | orchestrator | 2025-01-16 15:08:12.151255 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-01-16 15:08:12.151266 | orchestrator | 2025-01-16 15:08:12.151277 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-01-16 15:08:12.151289 | orchestrator | Thursday 16 January 2025 15:07:49 +0000 (0:00:01.597) 0:02:12.626 ****** 2025-01-16 15:08:12.151300 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:08:12.151311 | orchestrator | 2025-01-16 15:08:12.151322 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-01-16 15:08:12.151335 | orchestrator | Thursday 16 January 2025 15:07:57 +0000 (0:00:08.320) 0:02:20.946 ****** 2025-01-16 15:08:12.151354 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:12.151372 | orchestrator | 2025-01-16 15:08:12.151389 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-01-16 15:08:12.151407 | orchestrator | Thursday 16 January 2025 15:08:00 +0000 (0:00:03.345) 0:02:24.292 ****** 2025-01-16 15:08:12.151427 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:12.151447 | orchestrator | 2025-01-16 15:08:12.151465 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-01-16 15:08:12.151484 | orchestrator | 2025-01-16 15:08:12.151501 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-01-16 15:08:12.151521 | orchestrator | Thursday 16 January 2025 15:08:02 +0000 (0:00:01.584) 0:02:25.877 ****** 2025-01-16 15:08:12.151540 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:08:12.151579 | orchestrator | 2025-01-16 15:08:12.151598 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-01-16 15:08:12.151623 | orchestrator | Thursday 16 
January 2025 15:08:02 +0000 (0:00:00.444) 0:02:26.322 ****** 2025-01-16 15:08:12.151636 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:12.151654 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:12.151673 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:12.151690 | orchestrator | 2025-01-16 15:08:12.151708 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-01-16 15:08:12.151726 | orchestrator | Thursday 16 January 2025 15:08:04 +0000 (0:00:01.637) 0:02:27.959 ****** 2025-01-16 15:08:12.151744 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:12.151762 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:12.151780 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:08:12.151798 | orchestrator | 2025-01-16 15:08:12.151816 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-01-16 15:08:12.151832 | orchestrator | Thursday 16 January 2025 15:08:05 +0000 (0:00:01.398) 0:02:29.358 ****** 2025-01-16 15:08:12.151848 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:12.151865 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:12.151882 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:08:12.151900 | orchestrator | 2025-01-16 15:08:12.151918 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-01-16 15:08:12.151936 | orchestrator | Thursday 16 January 2025 15:08:07 +0000 (0:00:01.558) 0:02:30.916 ****** 2025-01-16 15:08:12.151954 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:12.151973 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:12.151992 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:08:12.152011 | orchestrator | 2025-01-16 15:08:12.152039 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-01-16 15:08:12.152058 | orchestrator | Thursday 16 January 2025 15:08:08 +0000 (0:00:01.502) 0:02:32.419 ****** 2025-01-16 15:08:12.152075 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:12.152091 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:12.152109 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:12.152125 | orchestrator | 2025-01-16 15:08:12.152141 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-01-16 15:08:12.152159 | orchestrator | Thursday 16 January 2025 15:08:11 +0000 (0:00:02.289) 0:02:34.708 ****** 2025-01-16 15:08:12.152175 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:12.152191 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:12.152209 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:12.152227 | orchestrator | 2025-01-16 15:08:12.152243 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 15:08:12.152261 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-01-16 15:08:12.152287 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=8  rescued=0 ignored=1  2025-01-16 15:08:12.152307 | orchestrator | testbed-node-1 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-01-16 15:08:12.152460 | orchestrator | testbed-node-2 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-01-16 15:08:12.152482 | orchestrator | 2025-01-16 15:08:12.152499 | orchestrator | 2025-01-16 
15:08:12.152516 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 15:08:12.152535 | orchestrator | Thursday 16 January 2025 15:08:11 +0000 (0:00:00.223) 0:02:34.931 ****** 2025-01-16 15:08:12.152581 | orchestrator | =============================================================================== 2025-01-16 15:08:12.152603 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 28.63s 2025-01-16 15:08:12.152623 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 26.69s 2025-01-16 15:08:12.152676 | orchestrator | mariadb : Restart mariadb-clustercheck container ----------------------- 14.05s 2025-01-16 15:08:15.172250 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.65s 2025-01-16 15:08:15.172339 | orchestrator | mariadb : Restart MariaDB container ------------------------------------- 8.32s 2025-01-16 15:08:15.172348 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 8.18s 2025-01-16 15:08:15.172356 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 6.69s 2025-01-16 15:08:15.172363 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 5.91s 2025-01-16 15:08:15.172369 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 5.79s 2025-01-16 15:08:15.172376 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.89s 2025-01-16 15:08:15.172382 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 3.35s 2025-01-16 15:08:15.172388 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.30s 2025-01-16 15:08:15.172394 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 3.15s 2025-01-16 15:08:15.172400 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.97s 2025-01-16 15:08:15.172406 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.29s 2025-01-16 15:08:15.172412 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 1.69s 2025-01-16 15:08:15.172418 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 1.64s 2025-01-16 15:08:15.172424 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 1.58s 2025-01-16 15:08:15.172430 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 1.56s 2025-01-16 15:08:15.172437 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 1.50s 2025-01-16 15:08:15.172443 | orchestrator | 2025-01-16 15:08:12 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:08:15.172462 | orchestrator | 2025-01-16 15:08:15 | INFO  | Task c003f5ec-2aa2-4397-9f74-def75e1af845 is in state STARTED 2025-01-16 15:08:15.176021 | orchestrator | 2025-01-16 15:08:15 | INFO  | Task be7674ca-28b0-439d-bfbd-76aef9632369 is in state STARTED 2025-01-16 15:08:15.176182 | orchestrator | 2025-01-16 15:08:15.177180 | orchestrator | 2025-01-16 15:08:15 | INFO  | Task 7b49e08b-a0a5-4f7b-9034-d0a5e17e03b6 is in state SUCCESS 2025-01-16 15:08:15.177222 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-01-16 15:08:15.177245 | 
orchestrator | 2025-01-16 15:08:15.177253 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-01-16 15:08:15.177263 | orchestrator | 2025-01-16 15:08:15.177271 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-01-16 15:08:15.177279 | orchestrator | Thursday 16 January 2025 14:58:42 +0000 (0:00:01.313) 0:00:01.313 ****** 2025-01-16 15:08:15.177289 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:08:15.177299 | orchestrator | 2025-01-16 15:08:15.177311 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-01-16 15:08:15.177319 | orchestrator | Thursday 16 January 2025 14:58:43 +0000 (0:00:01.094) 0:00:02.407 ****** 2025-01-16 15:08:15.177328 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-0) 2025-01-16 15:08:15.177337 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-1) 2025-01-16 15:08:15.177345 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-2) 2025-01-16 15:08:15.177353 | orchestrator | 2025-01-16 15:08:15.177362 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-01-16 15:08:15.177370 | orchestrator | Thursday 16 January 2025 14:58:44 +0000 (0:00:00.588) 0:00:02.995 ****** 2025-01-16 15:08:15.177394 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:08:15.177403 | orchestrator | 2025-01-16 15:08:15.177411 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-01-16 15:08:15.177418 | orchestrator | Thursday 16 January 2025 14:58:45 +0000 (0:00:01.069) 0:00:04.064 ****** 2025-01-16 15:08:15.177426 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.177435 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.177442 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.177450 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.177457 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.177468 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.177532 | orchestrator | 2025-01-16 15:08:15.177544 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-01-16 15:08:15.177597 | orchestrator | Thursday 16 January 2025 14:58:46 +0000 (0:00:01.135) 0:00:05.200 ****** 2025-01-16 15:08:15.177614 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.177626 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.177638 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.177648 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.177656 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.177663 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.177671 | orchestrator | 2025-01-16 15:08:15.177679 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-01-16 15:08:15.177686 | orchestrator | Thursday 16 January 2025 14:58:47 +0000 (0:00:00.808) 0:00:06.008 ****** 2025-01-16 15:08:15.177694 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.177702 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.177709 | orchestrator | ok: [testbed-node-5] 2025-01-16 
15:08:15.177717 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.177724 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.177732 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.177739 | orchestrator | 2025-01-16 15:08:15.177747 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-01-16 15:08:15.177754 | orchestrator | Thursday 16 January 2025 14:58:48 +0000 (0:00:00.976) 0:00:06.984 ****** 2025-01-16 15:08:15.177762 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.177769 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.177777 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.177784 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.177792 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.177800 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.177809 | orchestrator | 2025-01-16 15:08:15.177817 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-01-16 15:08:15.177826 | orchestrator | Thursday 16 January 2025 14:58:49 +0000 (0:00:00.772) 0:00:07.756 ****** 2025-01-16 15:08:15.177837 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.177849 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.177862 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.177874 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.177887 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.177898 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.177907 | orchestrator | 2025-01-16 15:08:15.177916 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-01-16 15:08:15.177925 | orchestrator | Thursday 16 January 2025 14:58:49 +0000 (0:00:00.620) 0:00:08.377 ****** 2025-01-16 15:08:15.177933 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.177942 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.177950 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.177958 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.177966 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.177975 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.177983 | orchestrator | 2025-01-16 15:08:15.177992 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-01-16 15:08:15.178000 | orchestrator | Thursday 16 January 2025 14:58:50 +0000 (0:00:00.894) 0:00:09.272 ****** 2025-01-16 15:08:15.178056 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.178068 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.178075 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.178083 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.178091 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.178099 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.178106 | orchestrator | 2025-01-16 15:08:15.178114 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-01-16 15:08:15.178121 | orchestrator | Thursday 16 January 2025 14:58:51 +0000 (0:00:00.597) 0:00:09.869 ****** 2025-01-16 15:08:15.178130 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.178138 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.178623 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.178650 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.178659 | orchestrator | ok: [testbed-node-1] 2025-01-16 
15:08:15.178667 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.178675 | orchestrator | 2025-01-16 15:08:15.178707 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-01-16 15:08:15.178716 | orchestrator | Thursday 16 January 2025 14:58:51 +0000 (0:00:00.605) 0:00:10.474 ****** 2025-01-16 15:08:15.178725 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-01-16 15:08:15.178733 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-01-16 15:08:15.178741 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-01-16 15:08:15.178749 | orchestrator | 2025-01-16 15:08:15.178758 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-01-16 15:08:15.178766 | orchestrator | Thursday 16 January 2025 14:58:52 +0000 (0:00:00.606) 0:00:11.081 ****** 2025-01-16 15:08:15.178774 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.178782 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.178790 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.178798 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.178806 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.178814 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.178822 | orchestrator | 2025-01-16 15:08:15.178830 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-01-16 15:08:15.178838 | orchestrator | Thursday 16 January 2025 14:58:53 +0000 (0:00:01.044) 0:00:12.125 ****** 2025-01-16 15:08:15.178845 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-01-16 15:08:15.178853 | orchestrator | changed: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-01-16 15:08:15.178861 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-01-16 15:08:15.178870 | orchestrator | 2025-01-16 15:08:15.178885 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-01-16 15:08:15.178894 | orchestrator | Thursday 16 January 2025 14:58:55 +0000 (0:00:01.894) 0:00:14.020 ****** 2025-01-16 15:08:15.178902 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-01-16 15:08:15.178910 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-01-16 15:08:15.178918 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-01-16 15:08:15.178927 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.178935 | orchestrator | 2025-01-16 15:08:15.178944 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-01-16 15:08:15.179041 | orchestrator | Thursday 16 January 2025 14:58:55 +0000 (0:00:00.520) 0:00:14.541 ****** 2025-01-16 15:08:15.179054 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-01-16 15:08:15.179066 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-01-16 
15:08:15.179083 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-01-16 15:08:15.179092 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.179104 | orchestrator | 2025-01-16 15:08:15.179112 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-01-16 15:08:15.179120 | orchestrator | Thursday 16 January 2025 14:58:56 +0000 (0:00:00.753) 0:00:15.295 ****** 2025-01-16 15:08:15.179129 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-01-16 15:08:15.179142 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-01-16 15:08:15.179150 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-01-16 15:08:15.179158 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.179167 | orchestrator | 2025-01-16 15:08:15.179175 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-01-16 15:08:15.179202 | orchestrator | Thursday 16 January 2025 14:58:56 +0000 (0:00:00.168) 0:00:15.463 ****** 2025-01-16 15:08:15.179215 | orchestrator | skipping: [testbed-node-3] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-01-16 14:58:54.089431', 'end': '2025-01-16 14:58:54.267887', 'delta': '0:00:00.178456', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-01-16 15:08:15.179229 | orchestrator | skipping: [testbed-node-3] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-01-16 14:58:54.667488', 'end': '2025-01-16 14:58:54.802821', 'delta': '0:00:00.135333', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': 
True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-01-16 15:08:15.179239 | orchestrator | skipping: [testbed-node-3] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-01-16 14:58:55.198409', 'end': '2025-01-16 14:58:55.337744', 'delta': '0:00:00.139335', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-01-16 15:08:15.179253 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.179461 | orchestrator | 2025-01-16 15:08:15.179470 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-01-16 15:08:15.179479 | orchestrator | Thursday 16 January 2025 14:58:57 +0000 (0:00:00.205) 0:00:15.669 ****** 2025-01-16 15:08:15.179487 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.179495 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.179503 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.179511 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.179519 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.179527 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.179535 | orchestrator | 2025-01-16 15:08:15.179543 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-01-16 15:08:15.179586 | orchestrator | Thursday 16 January 2025 14:58:58 +0000 (0:00:01.277) 0:00:16.947 ****** 2025-01-16 15:08:15.179596 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-01-16 15:08:15.179604 | orchestrator | 2025-01-16 15:08:15.179612 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-01-16 15:08:15.179620 | orchestrator | Thursday 16 January 2025 14:58:58 +0000 (0:00:00.561) 0:00:17.508 ****** 2025-01-16 15:08:15.179629 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.179641 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.179650 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.179658 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.179666 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.179674 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.179682 | orchestrator | 2025-01-16 15:08:15.179690 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-01-16 15:08:15.179698 | orchestrator | Thursday 16 January 2025 14:58:59 +0000 (0:00:00.601) 0:00:18.109 ****** 2025-01-16 15:08:15.179706 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.179715 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.179723 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.179731 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.179739 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.179747 | orchestrator | skipping: 
[testbed-node-2] 2025-01-16 15:08:15.179755 | orchestrator | 2025-01-16 15:08:15.179763 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-01-16 15:08:15.179771 | orchestrator | Thursday 16 January 2025 14:59:00 +0000 (0:00:01.026) 0:00:19.136 ****** 2025-01-16 15:08:15.179779 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.179787 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.179795 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.179803 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.179811 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.179819 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.179827 | orchestrator | 2025-01-16 15:08:15.179835 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-01-16 15:08:15.179862 | orchestrator | Thursday 16 January 2025 14:59:01 +0000 (0:00:00.628) 0:00:19.764 ****** 2025-01-16 15:08:15.179871 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.179880 | orchestrator | 2025-01-16 15:08:15.179888 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-01-16 15:08:15.179896 | orchestrator | Thursday 16 January 2025 14:59:01 +0000 (0:00:00.104) 0:00:19.868 ****** 2025-01-16 15:08:15.179911 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.179919 | orchestrator | 2025-01-16 15:08:15.179928 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-01-16 15:08:15.179936 | orchestrator | Thursday 16 January 2025 14:59:01 +0000 (0:00:00.219) 0:00:20.087 ****** 2025-01-16 15:08:15.179944 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.179952 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.179960 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.179968 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.179976 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.179984 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.179992 | orchestrator | 2025-01-16 15:08:15.180001 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-01-16 15:08:15.180009 | orchestrator | Thursday 16 January 2025 14:59:02 +0000 (0:00:00.917) 0:00:21.004 ****** 2025-01-16 15:08:15.180017 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.180025 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.180033 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.180041 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.180050 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.180057 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.180065 | orchestrator | 2025-01-16 15:08:15.180074 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-01-16 15:08:15.180082 | orchestrator | Thursday 16 January 2025 14:59:03 +0000 (0:00:00.993) 0:00:21.998 ****** 2025-01-16 15:08:15.180090 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.180098 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.180107 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.180116 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.180125 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.180134 | orchestrator | 
skipping: [testbed-node-2] 2025-01-16 15:08:15.180533 | orchestrator | 2025-01-16 15:08:15.180546 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-01-16 15:08:15.180581 | orchestrator | Thursday 16 January 2025 14:59:04 +0000 (0:00:01.026) 0:00:23.024 ****** 2025-01-16 15:08:15.180590 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.180599 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.180608 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.180616 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.180624 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.180638 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.180646 | orchestrator | 2025-01-16 15:08:15.180723 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-01-16 15:08:15.180737 | orchestrator | Thursday 16 January 2025 14:59:05 +0000 (0:00:00.763) 0:00:23.788 ****** 2025-01-16 15:08:15.180746 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.180755 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.180764 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.180773 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.180781 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.180790 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.180799 | orchestrator | 2025-01-16 15:08:15.180808 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-01-16 15:08:15.180817 | orchestrator | Thursday 16 January 2025 14:59:06 +0000 (0:00:01.077) 0:00:24.865 ****** 2025-01-16 15:08:15.180825 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.180834 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.180842 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.180851 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.180860 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.180869 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.180877 | orchestrator | 2025-01-16 15:08:15.180886 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-01-16 15:08:15.180903 | orchestrator | Thursday 16 January 2025 14:59:07 +0000 (0:00:00.749) 0:00:25.615 ****** 2025-01-16 15:08:15.180912 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.180920 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.180929 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.180938 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.180946 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.180955 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.180964 | orchestrator | 2025-01-16 15:08:15.180973 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-01-16 15:08:15.180981 | orchestrator | Thursday 16 January 2025 14:59:08 +0000 (0:00:01.069) 0:00:26.685 ****** 2025-01-16 15:08:15.180992 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--53488163--bd74--50cc--bfa0--f1a94ed01f33-osd--block--53488163--bd74--50cc--bfa0--f1a94ed01f33', 'dm-uuid-LVM-N0DQnPOx7vvMZ9gWckNqcQrXVN0ofw1Usc1jR19jN1dhrkIszLuXtjetQJiA4xdI'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.181257 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--562c7eeb--0cc2--5747--a030--082dcf3dd7cc-osd--block--562c7eeb--0cc2--5747--a030--082dcf3dd7cc', 'dm-uuid-LVM-TMfc5lZ2pMOOsxqCan5tJSpOeCg5GjY2kDWg0LqkFwPvxesaptTE5VSNzRCW2Kxy'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.181277 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.181312 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.181322 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.181331 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.181339 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.181355 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.181367 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.181376 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.181464 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6fd23866-075a-4a28-b944-e328afaaaf4b', 'scsi-SQEMU_QEMU_HARDDISK_6fd23866-075a-4a28-b944-e328afaaaf4b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6fd23866-075a-4a28-b944-e328afaaaf4b-part1', 'scsi-SQEMU_QEMU_HARDDISK_6fd23866-075a-4a28-b944-e328afaaaf4b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6fd23866-075a-4a28-b944-e328afaaaf4b-part14', 'scsi-SQEMU_QEMU_HARDDISK_6fd23866-075a-4a28-b944-e328afaaaf4b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6fd23866-075a-4a28-b944-e328afaaaf4b-part15', 'scsi-SQEMU_QEMU_HARDDISK_6fd23866-075a-4a28-b944-e328afaaaf4b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6fd23866-075a-4a28-b944-e328afaaaf4b-part16', 'scsi-SQEMU_QEMU_HARDDISK_6fd23866-075a-4a28-b944-e328afaaaf4b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:08:15.181481 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--53488163--bd74--50cc--bfa0--f1a94ed01f33-osd--block--53488163--bd74--50cc--bfa0--f1a94ed01f33'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2teqzv-jqXh-sIIm-pvj1-H7Ld-gSnN-nKf75c', 'scsi-0QEMU_QEMU_HARDDISK_a3fa75ed-12ad-4d98-b1e3-06058efbf95a', 'scsi-SQEMU_QEMU_HARDDISK_a3fa75ed-12ad-4d98-b1e3-06058efbf95a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:08:15.181497 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--562c7eeb--0cc2--5747--a030--082dcf3dd7cc-osd--block--562c7eeb--0cc2--5747--a030--082dcf3dd7cc'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0InOC3-vNqk-jV0S-t0JO-vnVc-lLEY-QjyWks', 'scsi-0QEMU_QEMU_HARDDISK_0646438b-3566-4bd7-ac9f-c7444a60ff3f', 'scsi-SQEMU_QEMU_HARDDISK_0646438b-3566-4bd7-ac9f-c7444a60ff3f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:08:15.181507 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_72b30f3d-ea4f-4fbe-a722-d77662b0ee19', 'scsi-SQEMU_QEMU_HARDDISK_72b30f3d-ea4f-4fbe-a722-d77662b0ee19'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:08:15.181580 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-01-16-14-28-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:08:15.181600 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d9c27d09--d80a--5255--9afb--1d5e2e5f2f02-osd--block--d9c27d09--d80a--5255--9afb--1d5e2e5f2f02', 'dm-uuid-LVM-52fmO9JMV2PHItuTk1y42oGRchjxoLr0T1j2CunOsF0BHFet4TO5M5WcuEoiy6B0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.181616 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9e6463fb--b573--5867--8a5d--b884b3259bdd-osd--block--9e6463fb--b573--5867--8a5d--b884b3259bdd', 
'dm-uuid-LVM-VYijzicZ0f6Xa169M8PRLFLtTHFmowc6HmGHoyV9rgPdZjl2PGHRQRDdzdzbYVju'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.181629 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.181650 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.181659 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.181667 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.181676 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.181684 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.181732 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.181744 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.181752 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.181771 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_899cd4fc-0b48-4c8f-9f9b-d306ab958cdd', 'scsi-SQEMU_QEMU_HARDDISK_899cd4fc-0b48-4c8f-9f9b-d306ab958cdd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_899cd4fc-0b48-4c8f-9f9b-d306ab958cdd-part1', 'scsi-SQEMU_QEMU_HARDDISK_899cd4fc-0b48-4c8f-9f9b-d306ab958cdd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_899cd4fc-0b48-4c8f-9f9b-d306ab958cdd-part14', 'scsi-SQEMU_QEMU_HARDDISK_899cd4fc-0b48-4c8f-9f9b-d306ab958cdd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_899cd4fc-0b48-4c8f-9f9b-d306ab958cdd-part15', 'scsi-SQEMU_QEMU_HARDDISK_899cd4fc-0b48-4c8f-9f9b-d306ab958cdd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_899cd4fc-0b48-4c8f-9f9b-d306ab958cdd-part16', 'scsi-SQEMU_QEMU_HARDDISK_899cd4fc-0b48-4c8f-9f9b-d306ab958cdd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:08:15.181786 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--d9c27d09--d80a--5255--9afb--1d5e2e5f2f02-osd--block--d9c27d09--d80a--5255--9afb--1d5e2e5f2f02'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0Exx1e-SsRy-1z32-lkak-o2Hz-xAbz-hZptiH', 'scsi-0QEMU_QEMU_HARDDISK_d1e8c7e9-38c3-4780-8ab7-178f632f9eb8', 'scsi-SQEMU_QEMU_HARDDISK_d1e8c7e9-38c3-4780-8ab7-178f632f9eb8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:08:15.181829 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--9e6463fb--b573--5867--8a5d--b884b3259bdd-osd--block--9e6463fb--b573--5867--8a5d--b884b3259bdd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-32OxvT-YHAF-QogR-Ot0d-6QSn-yngZ-VRUPce', 'scsi-0QEMU_QEMU_HARDDISK_511497a6-ce11-47ca-8c02-acccaddecbc9', 'scsi-SQEMU_QEMU_HARDDISK_511497a6-ce11-47ca-8c02-acccaddecbc9'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:08:15.181841 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f7bd705e-b5e0-4446-bf55-1dfa4188ee04', 'scsi-SQEMU_QEMU_HARDDISK_f7bd705e-b5e0-4446-bf55-1dfa4188ee04'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:08:15.181851 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-01-16-14-28-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:08:15.181864 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--53007ac5--07c2--53cd--add6--e57729925218-osd--block--53007ac5--07c2--53cd--add6--e57729925218', 'dm-uuid-LVM-oQCheSm9KrUUJMm82iuOynV8eiWIUV1TGYoi3INQOEORaMRkjHi2UpRkGgiaqKDU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.181880 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--54c8019f--0033--5b40--9c4f--7f2e43f78b89-osd--block--54c8019f--0033--5b40--9c4f--7f2e43f78b89', 
'dm-uuid-LVM-yqDYE7gr3xSGUfJQD2Za48Kd0b3UBZB6F1ZEjw2Yw9onm42m2LOC3Xclx5lIdmVf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.181889 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.181897 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.181935 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.181946 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.181957 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.181966 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.181985 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': 
None, 'virtual': 1}})  2025-01-16 15:08:15.181995 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.182085 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5f86695f-209f-4777-b6ae-0a791cf41e6e', 'scsi-SQEMU_QEMU_HARDDISK_5f86695f-209f-4777-b6ae-0a791cf41e6e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5f86695f-209f-4777-b6ae-0a791cf41e6e-part1', 'scsi-SQEMU_QEMU_HARDDISK_5f86695f-209f-4777-b6ae-0a791cf41e6e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5f86695f-209f-4777-b6ae-0a791cf41e6e-part14', 'scsi-SQEMU_QEMU_HARDDISK_5f86695f-209f-4777-b6ae-0a791cf41e6e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5f86695f-209f-4777-b6ae-0a791cf41e6e-part15', 'scsi-SQEMU_QEMU_HARDDISK_5f86695f-209f-4777-b6ae-0a791cf41e6e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5f86695f-209f-4777-b6ae-0a791cf41e6e-part16', 'scsi-SQEMU_QEMU_HARDDISK_5f86695f-209f-4777-b6ae-0a791cf41e6e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:08:15.182102 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--53007ac5--07c2--53cd--add6--e57729925218-osd--block--53007ac5--07c2--53cd--add6--e57729925218'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Pnrdw0-BkoA-7kIZ-iZct-FPdw-sR1f-noqHVh', 'scsi-0QEMU_QEMU_HARDDISK_0aac5059-2a3a-4141-840f-fb09a7465e72', 'scsi-SQEMU_QEMU_HARDDISK_0aac5059-2a3a-4141-840f-fb09a7465e72'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:08:15.182113 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.182122 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--54c8019f--0033--5b40--9c4f--7f2e43f78b89-osd--block--54c8019f--0033--5b40--9c4f--7f2e43f78b89'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-31xuQ3-lGqv-j3fr-wAiK-iSrf-pEcD-SUKWpi', 'scsi-0QEMU_QEMU_HARDDISK_97685de2-31d7-40a6-8026-91294c9f6af1', 'scsi-SQEMU_QEMU_HARDDISK_97685de2-31d7-40a6-8026-91294c9f6af1'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:08:15.182139 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d740be6b-1b5d-4ad1-85aa-7275c0983c2d', 'scsi-SQEMU_QEMU_HARDDISK_d740be6b-1b5d-4ad1-85aa-7275c0983c2d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:08:15.182150 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-01-16-14-28-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:08:15.182159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.182169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.182658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.182686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.182697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.182720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.182731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.182747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.182758 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.182872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7deb246b-94c7-4ccf-88e4-d5863b7b5cdf', 'scsi-SQEMU_QEMU_HARDDISK_7deb246b-94c7-4ccf-88e4-d5863b7b5cdf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7deb246b-94c7-4ccf-88e4-d5863b7b5cdf-part1', 'scsi-SQEMU_QEMU_HARDDISK_7deb246b-94c7-4ccf-88e4-d5863b7b5cdf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7deb246b-94c7-4ccf-88e4-d5863b7b5cdf-part14', 'scsi-SQEMU_QEMU_HARDDISK_7deb246b-94c7-4ccf-88e4-d5863b7b5cdf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7deb246b-94c7-4ccf-88e4-d5863b7b5cdf-part15', 'scsi-SQEMU_QEMU_HARDDISK_7deb246b-94c7-4ccf-88e4-d5863b7b5cdf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7deb246b-94c7-4ccf-88e4-d5863b7b5cdf-part16', 'scsi-SQEMU_QEMU_HARDDISK_7deb246b-94c7-4ccf-88e4-d5863b7b5cdf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:08:15.182901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32889f36-f55b-4b84-b5ce-98c4b6c26bc3', 'scsi-SQEMU_QEMU_HARDDISK_32889f36-f55b-4b84-b5ce-98c4b6c26bc3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:08:15.182920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a8ee2823-701b-4f46-84dc-c0a96e4e2751', 'scsi-SQEMU_QEMU_HARDDISK_a8ee2823-701b-4f46-84dc-c0a96e4e2751'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:08:15.182931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_08ea27e5-46c9-491a-b107-4789383846f8', 'scsi-SQEMU_QEMU_HARDDISK_08ea27e5-46c9-491a-b107-4789383846f8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:08:15.182941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-01-16-14-28-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:08:15.182952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.182962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.183294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.183373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.183389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.183408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.183419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.183430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.183489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd92237a-eb9b-415b-89cb-c6ad5949ce4a', 'scsi-SQEMU_QEMU_HARDDISK_bd92237a-eb9b-415b-89cb-c6ad5949ce4a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd92237a-eb9b-415b-89cb-c6ad5949ce4a-part1', 'scsi-SQEMU_QEMU_HARDDISK_bd92237a-eb9b-415b-89cb-c6ad5949ce4a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd92237a-eb9b-415b-89cb-c6ad5949ce4a-part14', 'scsi-SQEMU_QEMU_HARDDISK_bd92237a-eb9b-415b-89cb-c6ad5949ce4a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd92237a-eb9b-415b-89cb-c6ad5949ce4a-part15', 'scsi-SQEMU_QEMU_HARDDISK_bd92237a-eb9b-415b-89cb-c6ad5949ce4a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd92237a-eb9b-415b-89cb-c6ad5949ce4a-part16', 'scsi-SQEMU_QEMU_HARDDISK_bd92237a-eb9b-415b-89cb-c6ad5949ce4a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:08:15.183505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5ec46ceb-bae7-4572-9a93-049002478163', 'scsi-SQEMU_QEMU_HARDDISK_5ec46ceb-bae7-4572-9a93-049002478163'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:08:15.183521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a1dc0d1a-c1df-431a-8f1d-6c726706706a', 'scsi-SQEMU_QEMU_HARDDISK_a1dc0d1a-c1df-431a-8f1d-6c726706706a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:08:15.183531 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.183541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e08297a-47bf-4aeb-9f10-e9a4d07161c8', 'scsi-SQEMU_QEMU_HARDDISK_2e08297a-47bf-4aeb-9f10-e9a4d07161c8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:08:15.183601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-01-16-14-28-25-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:08:15.183615 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.183625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.183641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.183651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.183717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.183737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.183764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.183775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.183785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:08:15.183796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1282efbc-d021-4c81-8029-0b4a449576ff', 'scsi-SQEMU_QEMU_HARDDISK_1282efbc-d021-4c81-8029-0b4a449576ff'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1282efbc-d021-4c81-8029-0b4a449576ff-part1', 'scsi-SQEMU_QEMU_HARDDISK_1282efbc-d021-4c81-8029-0b4a449576ff-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1282efbc-d021-4c81-8029-0b4a449576ff-part14', 'scsi-SQEMU_QEMU_HARDDISK_1282efbc-d021-4c81-8029-0b4a449576ff-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1282efbc-d021-4c81-8029-0b4a449576ff-part15', 'scsi-SQEMU_QEMU_HARDDISK_1282efbc-d021-4c81-8029-0b4a449576ff-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1282efbc-d021-4c81-8029-0b4a449576ff-part16', 'scsi-SQEMU_QEMU_HARDDISK_1282efbc-d021-4c81-8029-0b4a449576ff-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:08:15.183859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20cc569f-32be-418c-b198-01024ddefd54', 'scsi-SQEMU_QEMU_HARDDISK_20cc569f-32be-418c-b198-01024ddefd54'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:08:15.183879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_18d7b00d-e0db-4cba-a669-4d06ca6689ec', 'scsi-SQEMU_QEMU_HARDDISK_18d7b00d-e0db-4cba-a669-4d06ca6689ec'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:08:15.183890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_144e2d31-b1a0-45c5-bee7-951938d47a21', 'scsi-SQEMU_QEMU_HARDDISK_144e2d31-b1a0-45c5-bee7-951938d47a21'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:08:15.183900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-01-16-14-28-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:08:15.183909 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.183919 | orchestrator | 2025-01-16 15:08:15.183929 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-01-16 15:08:15.183939 | orchestrator | Thursday 16 January 2025 14:59:09 +0000 (0:00:01.330) 0:00:28.015 ****** 2025-01-16 15:08:15.183948 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-01-16 15:08:15.183958 | orchestrator | 2025-01-16 15:08:15.183967 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-01-16 15:08:15.183976 | orchestrator | Thursday 16 January 2025 14:59:10 +0000 (0:00:00.812) 0:00:28.828 ****** 2025-01-16 15:08:15.183985 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.183995 | orchestrator | 2025-01-16 15:08:15.184004 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-01-16 15:08:15.184013 | orchestrator | Thursday 16 January 2025 14:59:10 +0000 (0:00:00.176) 0:00:29.005 ****** 2025-01-16 15:08:15.184023 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.184032 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.184041 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.184050 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.184059 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.184081 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.184092 | orchestrator | 2025-01-16 15:08:15.184102 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-01-16 15:08:15.184112 | orchestrator | Thursday 16 January 2025 14:59:11 +0000 (0:00:00.803) 0:00:29.808 ****** 2025-01-16 15:08:15.184130 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.184141 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.184150 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.184160 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.184170 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.184179 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.184189 | orchestrator | 2025-01-16 15:08:15.184199 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-01-16 15:08:15.184209 | orchestrator | Thursday 16 January 2025 14:59:12 +0000 
(0:00:01.508) 0:00:31.317 ****** 2025-01-16 15:08:15.184218 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.184228 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.184238 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.184247 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.184257 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.184267 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.184276 | orchestrator | 2025-01-16 15:08:15.184286 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-01-16 15:08:15.184296 | orchestrator | Thursday 16 January 2025 14:59:13 +0000 (0:00:00.692) 0:00:32.009 ****** 2025-01-16 15:08:15.184306 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.184372 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.184387 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.184397 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.184407 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.184417 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.184427 | orchestrator | 2025-01-16 15:08:15.184436 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-01-16 15:08:15.184446 | orchestrator | Thursday 16 January 2025 14:59:14 +0000 (0:00:00.861) 0:00:32.870 ****** 2025-01-16 15:08:15.184456 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.184467 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.184479 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.184490 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.184500 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.184509 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.184519 | orchestrator | 2025-01-16 15:08:15.184529 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-01-16 15:08:15.184539 | orchestrator | Thursday 16 January 2025 14:59:15 +0000 (0:00:00.744) 0:00:33.615 ****** 2025-01-16 15:08:15.184548 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.184577 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.184587 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.184596 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.184606 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.184615 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.184624 | orchestrator | 2025-01-16 15:08:15.184633 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-01-16 15:08:15.184643 | orchestrator | Thursday 16 January 2025 14:59:16 +0000 (0:00:01.173) 0:00:34.789 ****** 2025-01-16 15:08:15.184652 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.184662 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.184671 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.184681 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.184690 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.184700 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.184709 | orchestrator | 2025-01-16 15:08:15.184719 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-01-16 15:08:15.184728 | orchestrator | Thursday 16 January 2025 14:59:17 +0000 (0:00:01.008) 0:00:35.797 
****** 2025-01-16 15:08:15.184737 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-01-16 15:08:15.184747 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-01-16 15:08:15.184757 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-01-16 15:08:15.184776 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-01-16 15:08:15.184785 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-01-16 15:08:15.184795 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-01-16 15:08:15.184805 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-01-16 15:08:15.184814 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.184823 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-01-16 15:08:15.184833 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-01-16 15:08:15.184842 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.184852 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-01-16 15:08:15.184861 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.184871 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-01-16 15:08:15.184880 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-01-16 15:08:15.184890 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-01-16 15:08:15.184899 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.184908 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-01-16 15:08:15.184918 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-01-16 15:08:15.184927 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-01-16 15:08:15.184936 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.184946 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-01-16 15:08:15.184955 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-01-16 15:08:15.184964 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.184974 | orchestrator | 2025-01-16 15:08:15.184983 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-01-16 15:08:15.184992 | orchestrator | Thursday 16 January 2025 14:59:20 +0000 (0:00:02.855) 0:00:38.653 ****** 2025-01-16 15:08:15.185002 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-01-16 15:08:15.185014 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-01-16 15:08:15.185024 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-01-16 15:08:15.185035 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-01-16 15:08:15.185045 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.185056 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-01-16 15:08:15.185066 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-01-16 15:08:15.185077 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-01-16 15:08:15.185087 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-01-16 15:08:15.185098 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-01-16 15:08:15.185109 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-01-16 15:08:15.185119 | orchestrator | 
skipping: [testbed-node-4] 2025-01-16 15:08:15.185130 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-01-16 15:08:15.185141 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.185151 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-01-16 15:08:15.185161 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-01-16 15:08:15.185172 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.185183 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-01-16 15:08:15.185252 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-01-16 15:08:15.185267 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-01-16 15:08:15.185278 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.185288 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-01-16 15:08:15.185299 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-01-16 15:08:15.185316 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.185333 | orchestrator | 2025-01-16 15:08:15.185344 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-01-16 15:08:15.185354 | orchestrator | Thursday 16 January 2025 14:59:21 +0000 (0:00:01.707) 0:00:40.360 ****** 2025-01-16 15:08:15.185366 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-01-16 15:08:15.185377 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-01-16 15:08:15.185391 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-01-16 15:08:15.185403 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-01-16 15:08:15.185412 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-01-16 15:08:15.185422 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-01-16 15:08:15.185432 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-01-16 15:08:15.185442 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-01-16 15:08:15.185452 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-01-16 15:08:15.185462 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-01-16 15:08:15.185472 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-01-16 15:08:15.185481 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-01-16 15:08:15.185491 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-01-16 15:08:15.185501 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-01-16 15:08:15.185511 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-01-16 15:08:15.185521 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-01-16 15:08:15.185530 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-01-16 15:08:15.185611 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-01-16 15:08:15.185630 | orchestrator | 2025-01-16 15:08:15.185646 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-01-16 15:08:15.185660 | orchestrator | Thursday 16 January 2025 14:59:27 +0000 (0:00:05.211) 0:00:45.571 ****** 2025-01-16 15:08:15.185670 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-01-16 15:08:15.185680 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-01-16 15:08:15.185689 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-2)  2025-01-16 15:08:15.185698 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.185707 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-01-16 15:08:15.185717 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-01-16 15:08:15.185726 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-01-16 15:08:15.185736 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-01-16 15:08:15.185745 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-01-16 15:08:15.185755 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-01-16 15:08:15.185764 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.185773 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-01-16 15:08:15.185783 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-01-16 15:08:15.185792 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-01-16 15:08:15.185801 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.185811 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-01-16 15:08:15.185820 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-01-16 15:08:15.185829 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-01-16 15:08:15.185838 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.185848 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.185857 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-01-16 15:08:15.185866 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-01-16 15:08:15.185875 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-01-16 15:08:15.185891 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.185900 | orchestrator | 2025-01-16 15:08:15.185909 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-01-16 15:08:15.185919 | orchestrator | Thursday 16 January 2025 14:59:27 +0000 (0:00:00.810) 0:00:46.381 ****** 2025-01-16 15:08:15.185928 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-01-16 15:08:15.185938 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-01-16 15:08:15.185947 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-01-16 15:08:15.185956 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-01-16 15:08:15.185965 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-01-16 15:08:15.185974 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-01-16 15:08:15.185984 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.185993 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-01-16 15:08:15.186002 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.186011 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-01-16 15:08:15.186047 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-01-16 15:08:15.186056 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-01-16 15:08:15.186066 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-01-16 15:08:15.186075 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-01-16 15:08:15.186154 | orchestrator | skipping: 
[testbed-node-5] 2025-01-16 15:08:15.186167 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-01-16 15:08:15.186176 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.186185 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-01-16 15:08:15.186194 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-01-16 15:08:15.186202 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.186211 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-01-16 15:08:15.186220 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-01-16 15:08:15.186229 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-01-16 15:08:15.186237 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.186251 | orchestrator | 2025-01-16 15:08:15.186260 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-01-16 15:08:15.186268 | orchestrator | Thursday 16 January 2025 14:59:28 +0000 (0:00:00.838) 0:00:47.220 ****** 2025-01-16 15:08:15.186277 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-01-16 15:08:15.186287 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-01-16 15:08:15.186295 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-01-16 15:08:15.186304 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.186313 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-01-16 15:08:15.186322 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-01-16 15:08:15.186331 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-01-16 15:08:15.186339 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.186348 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-01-16 15:08:15.186356 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-01-16 15:08:15.186368 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-01-16 15:08:15.186377 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.186386 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'}) 2025-01-16 15:08:15.186400 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-01-16 15:08:15.186408 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-01-16 15:08:15.186417 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-01-16 15:08:15.186426 | orchestrator | ok: [testbed-node-1] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'}) 2025-01-16 15:08:15.186434 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-01-16 15:08:15.186443 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-01-16 15:08:15.186452 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-1', 
'addr': '192.168.16.11'})  2025-01-16 15:08:15.186461 | orchestrator | ok: [testbed-node-2] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'}) 2025-01-16 15:08:15.186469 | orchestrator | 2025-01-16 15:08:15.186478 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-01-16 15:08:15.186487 | orchestrator | Thursday 16 January 2025 14:59:29 +0000 (0:00:00.780) 0:00:48.000 ****** 2025-01-16 15:08:15.186495 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.186504 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.186513 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.186522 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 15:08:15.186530 | orchestrator | 2025-01-16 15:08:15.186539 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-01-16 15:08:15.186548 | orchestrator | Thursday 16 January 2025 14:59:30 +0000 (0:00:01.297) 0:00:49.298 ****** 2025-01-16 15:08:15.186575 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.186585 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.186594 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.186603 | orchestrator | 2025-01-16 15:08:15.186611 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-01-16 15:08:15.186620 | orchestrator | Thursday 16 January 2025 14:59:31 +0000 (0:00:00.681) 0:00:49.979 ****** 2025-01-16 15:08:15.186629 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.186641 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.186650 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.186659 | orchestrator | 2025-01-16 15:08:15.186667 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-01-16 15:08:15.186676 | orchestrator | Thursday 16 January 2025 14:59:32 +0000 (0:00:00.586) 0:00:50.565 ****** 2025-01-16 15:08:15.186684 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.186693 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.186702 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.186710 | orchestrator | 2025-01-16 15:08:15.186719 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-01-16 15:08:15.186728 | orchestrator | Thursday 16 January 2025 14:59:32 +0000 (0:00:00.778) 0:00:51.343 ****** 2025-01-16 15:08:15.186736 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.186799 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.186812 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.186821 | orchestrator | 2025-01-16 15:08:15.186830 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-01-16 15:08:15.186839 | orchestrator | Thursday 16 January 2025 14:59:33 +0000 (0:00:00.901) 0:00:52.245 ****** 2025-01-16 15:08:15.186848 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-01-16 15:08:15.186857 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-01-16 15:08:15.186865 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-01-16 15:08:15.186880 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.186888 | orchestrator | 2025-01-16 
15:08:15.186897 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-01-16 15:08:15.186906 | orchestrator | Thursday 16 January 2025 14:59:34 +0000 (0:00:00.367) 0:00:52.613 ****** 2025-01-16 15:08:15.186914 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-01-16 15:08:15.186923 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-01-16 15:08:15.186932 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-01-16 15:08:15.186940 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.186949 | orchestrator | 2025-01-16 15:08:15.186958 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-01-16 15:08:15.186967 | orchestrator | Thursday 16 January 2025 14:59:34 +0000 (0:00:00.281) 0:00:52.894 ****** 2025-01-16 15:08:15.186975 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-01-16 15:08:15.186984 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-01-16 15:08:15.186993 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-01-16 15:08:15.187002 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.187010 | orchestrator | 2025-01-16 15:08:15.187019 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-01-16 15:08:15.187027 | orchestrator | Thursday 16 January 2025 14:59:34 +0000 (0:00:00.492) 0:00:53.386 ****** 2025-01-16 15:08:15.187036 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.187045 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.187053 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.187077 | orchestrator | 2025-01-16 15:08:15.187088 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-01-16 15:08:15.187097 | orchestrator | Thursday 16 January 2025 14:59:35 +0000 (0:00:00.906) 0:00:54.293 ****** 2025-01-16 15:08:15.187107 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-01-16 15:08:15.187116 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-01-16 15:08:15.187126 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-01-16 15:08:15.187135 | orchestrator | 2025-01-16 15:08:15.187144 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-01-16 15:08:15.187154 | orchestrator | Thursday 16 January 2025 14:59:36 +0000 (0:00:00.625) 0:00:54.919 ****** 2025-01-16 15:08:15.187163 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.187172 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.187182 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.187191 | orchestrator | 2025-01-16 15:08:15.187200 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-01-16 15:08:15.187210 | orchestrator | Thursday 16 January 2025 14:59:36 +0000 (0:00:00.514) 0:00:55.433 ****** 2025-01-16 15:08:15.187219 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.187229 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.187238 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.187248 | orchestrator | 2025-01-16 15:08:15.187257 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-01-16 15:08:15.187271 | orchestrator | Thursday 16 January 2025 14:59:37 +0000 (0:00:00.575) 0:00:56.009 ****** 2025-01-16 15:08:15.187284 | 
orchestrator | skipping: [testbed-node-3] => (item=0)  2025-01-16 15:08:15.187293 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.187303 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-01-16 15:08:15.187312 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.187321 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-01-16 15:08:15.187331 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.187340 | orchestrator | 2025-01-16 15:08:15.187350 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-01-16 15:08:15.187359 | orchestrator | Thursday 16 January 2025 14:59:38 +0000 (0:00:00.944) 0:00:56.954 ****** 2025-01-16 15:08:15.187368 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-01-16 15:08:15.187382 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.187391 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-01-16 15:08:15.187400 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.187413 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-01-16 15:08:15.187422 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.187430 | orchestrator | 2025-01-16 15:08:15.187439 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-01-16 15:08:15.187448 | orchestrator | Thursday 16 January 2025 14:59:39 +0000 (0:00:00.745) 0:00:57.699 ****** 2025-01-16 15:08:15.187456 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-01-16 15:08:15.187465 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-01-16 15:08:15.187474 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-01-16 15:08:15.187483 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-01-16 15:08:15.187491 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-01-16 15:08:15.187500 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.187509 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-01-16 15:08:15.187618 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-01-16 15:08:15.187634 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.187644 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-01-16 15:08:15.187653 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-01-16 15:08:15.187661 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.187670 | orchestrator | 2025-01-16 15:08:15.187679 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-01-16 15:08:15.187688 | orchestrator | Thursday 16 January 2025 14:59:41 +0000 (0:00:02.374) 0:01:00.074 ****** 2025-01-16 15:08:15.187696 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.187705 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.187714 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.187722 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.187731 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.187739 | orchestrator | skipping: [testbed-node-2] 
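For orientation: the ceph-facts block above only computes facts on the RGW hosts (testbed-node-3/4/5), and the end result is a per-host rgw_instances list whose single entry matches the item shown in the rgw_instances_host task, e.g. {'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}. Below is a minimal, self-contained sketch of that fact construction. It is not the literal ceph-ansible task; the variables _radosgw_address, radosgw_frontend_port and radosgw_num_instances are assumptions chosen to mirror the values visible in this log.

# rgw_instances_sketch.yml -- illustrative only; run with: ansible-playbook rgw_instances_sketch.yml
- hosts: localhost
  connection: local
  gather_facts: false
  vars:
    _radosgw_address: 192.168.16.13   # assumed: the address resolved by the earlier set_fact tasks
    radosgw_frontend_port: 8081
    radosgw_num_instances: 1          # assumed variable name for the per-host instance count
  tasks:
    - name: set_fact rgw_instances without rgw multisite (sketch)
      ansible.builtin.set_fact:
        # accumulate one endpoint dict per instance index (item=0, item=1, ...)
        rgw_instances: >-
          {{ rgw_instances | default([]) +
             [{'instance_name': 'rgw' ~ item,
               'radosgw_address': _radosgw_address,
               'radosgw_frontend_port': (radosgw_frontend_port | int) + (item | int)}] }}
      loop: "{{ range(0, radosgw_num_instances) | list }}"

    - name: show the resulting fact
      ansible.builtin.debug:
        var: rgw_instances

Since rgw multisite is not enabled in this testbed, only the "without rgw multisite" variant produces items (the ok: (item=0) results above); the multisite counterpart is skipped.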
2025-01-16 15:08:15.187748 | orchestrator | 2025-01-16 15:08:15.187757 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-01-16 15:08:15.187766 | orchestrator | Thursday 16 January 2025 14:59:42 +0000 (0:00:00.567) 0:01:00.642 ****** 2025-01-16 15:08:15.187774 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-01-16 15:08:15.187783 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-01-16 15:08:15.187791 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-01-16 15:08:15.187800 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-01-16 15:08:15.187809 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-01-16 15:08:15.187817 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-01-16 15:08:15.187826 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-01-16 15:08:15.187834 | orchestrator | 2025-01-16 15:08:15.187843 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-01-16 15:08:15.187852 | orchestrator | Thursday 16 January 2025 14:59:43 +0000 (0:00:01.617) 0:01:02.259 ****** 2025-01-16 15:08:15.187860 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-01-16 15:08:15.187869 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-01-16 15:08:15.187886 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-01-16 15:08:15.187895 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-01-16 15:08:15.187903 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-01-16 15:08:15.187911 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-01-16 15:08:15.187919 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-01-16 15:08:15.187927 | orchestrator | 2025-01-16 15:08:15.187936 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-01-16 15:08:15.187944 | orchestrator | Thursday 16 January 2025 14:59:46 +0000 (0:00:02.309) 0:01:04.569 ****** 2025-01-16 15:08:15.187953 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:08:15.187962 | orchestrator | 2025-01-16 15:08:15.187970 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-01-16 15:08:15.187978 | orchestrator | Thursday 16 January 2025 14:59:47 +0000 (0:00:01.319) 0:01:05.889 ****** 2025-01-16 15:08:15.187986 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.187994 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.188002 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.188010 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.188018 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.188026 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.188034 | orchestrator | 2025-01-16 15:08:15.188042 | orchestrator | TASK [ceph-handler : check for an osd 
container] ******************************* 2025-01-16 15:08:15.188050 | orchestrator | Thursday 16 January 2025 14:59:48 +0000 (0:00:01.237) 0:01:07.126 ****** 2025-01-16 15:08:15.188058 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.188066 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.188075 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.188083 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.188091 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.188099 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.188107 | orchestrator | 2025-01-16 15:08:15.188115 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-01-16 15:08:15.188123 | orchestrator | Thursday 16 January 2025 14:59:49 +0000 (0:00:00.861) 0:01:07.988 ****** 2025-01-16 15:08:15.188130 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.188143 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.188151 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.188159 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.188167 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.188176 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.188184 | orchestrator | 2025-01-16 15:08:15.188192 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-01-16 15:08:15.188203 | orchestrator | Thursday 16 January 2025 14:59:50 +0000 (0:00:01.311) 0:01:09.299 ****** 2025-01-16 15:08:15.188212 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.188220 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.188228 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.188236 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.188244 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.188252 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.188260 | orchestrator | 2025-01-16 15:08:15.188268 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-01-16 15:08:15.188327 | orchestrator | Thursday 16 January 2025 14:59:52 +0000 (0:00:01.306) 0:01:10.606 ****** 2025-01-16 15:08:15.188339 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.188347 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.188355 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.188364 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.188377 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.188385 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.188394 | orchestrator | 2025-01-16 15:08:15.188402 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-01-16 15:08:15.188410 | orchestrator | Thursday 16 January 2025 14:59:54 +0000 (0:00:02.144) 0:01:12.750 ****** 2025-01-16 15:08:15.188418 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.188426 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.188434 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.188442 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.188450 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.188458 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.188467 | orchestrator | 2025-01-16 15:08:15.188475 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-01-16 
15:08:15.188483 | orchestrator | Thursday 16 January 2025 14:59:54 +0000 (0:00:00.650) 0:01:13.401 ****** 2025-01-16 15:08:15.188491 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.188499 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.188508 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.188516 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.188524 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.188532 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.188540 | orchestrator | 2025-01-16 15:08:15.188548 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-01-16 15:08:15.188571 | orchestrator | Thursday 16 January 2025 14:59:55 +0000 (0:00:00.797) 0:01:14.198 ****** 2025-01-16 15:08:15.188580 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.188588 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.188596 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.188604 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.188627 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.188636 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.188644 | orchestrator | 2025-01-16 15:08:15.188652 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-01-16 15:08:15.188660 | orchestrator | Thursday 16 January 2025 14:59:56 +0000 (0:00:00.651) 0:01:14.850 ****** 2025-01-16 15:08:15.188668 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.188676 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.188684 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.188692 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.188700 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.188708 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.188716 | orchestrator | 2025-01-16 15:08:15.188724 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-01-16 15:08:15.188732 | orchestrator | Thursday 16 January 2025 14:59:57 +0000 (0:00:01.067) 0:01:15.917 ****** 2025-01-16 15:08:15.188740 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.188748 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.188757 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.188764 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.188773 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.188781 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.188788 | orchestrator | 2025-01-16 15:08:15.188796 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-01-16 15:08:15.188805 | orchestrator | Thursday 16 January 2025 14:59:58 +0000 (0:00:00.792) 0:01:16.709 ****** 2025-01-16 15:08:15.188813 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.188821 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.188829 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.188837 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.188849 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.188857 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.188865 | orchestrator | 2025-01-16 15:08:15.188874 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-01-16 15:08:15.188887 | orchestrator 
| Thursday 16 January 2025 14:59:59 +0000 (0:00:01.677) 0:01:18.387 ****** 2025-01-16 15:08:15.188895 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.188903 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.188911 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.188919 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.188927 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.188935 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.188944 | orchestrator | 2025-01-16 15:08:15.188951 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-01-16 15:08:15.188960 | orchestrator | Thursday 16 January 2025 15:00:00 +0000 (0:00:01.096) 0:01:19.484 ****** 2025-01-16 15:08:15.188968 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.188976 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.188984 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.188992 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.189000 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.189008 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.189016 | orchestrator | 2025-01-16 15:08:15.189024 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-01-16 15:08:15.189032 | orchestrator | Thursday 16 January 2025 15:00:02 +0000 (0:00:01.122) 0:01:20.606 ****** 2025-01-16 15:08:15.189040 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.189048 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.189056 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.189064 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.189072 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.189080 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.189089 | orchestrator | 2025-01-16 15:08:15.189097 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-01-16 15:08:15.189105 | orchestrator | Thursday 16 January 2025 15:00:03 +0000 (0:00:00.981) 0:01:21.588 ****** 2025-01-16 15:08:15.189113 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.189121 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.189129 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.189137 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.189145 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.189153 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.189161 | orchestrator | 2025-01-16 15:08:15.189221 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-01-16 15:08:15.189233 | orchestrator | Thursday 16 January 2025 15:00:04 +0000 (0:00:01.124) 0:01:22.712 ****** 2025-01-16 15:08:15.189242 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.189251 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.189260 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.189268 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.189277 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.189286 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.189294 | orchestrator | 2025-01-16 15:08:15.189303 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-01-16 15:08:15.189312 | orchestrator | Thursday 16 January 2025 15:00:04 +0000 (0:00:00.567) 
0:01:23.280 ****** 2025-01-16 15:08:15.189320 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.189329 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.189338 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.189346 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.189355 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.189363 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.189372 | orchestrator | 2025-01-16 15:08:15.189380 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-01-16 15:08:15.189389 | orchestrator | Thursday 16 January 2025 15:00:05 +0000 (0:00:00.626) 0:01:23.907 ****** 2025-01-16 15:08:15.189398 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.189406 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.189421 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.189429 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.189438 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.189447 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.189455 | orchestrator | 2025-01-16 15:08:15.189464 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-01-16 15:08:15.189473 | orchestrator | Thursday 16 January 2025 15:00:05 +0000 (0:00:00.625) 0:01:24.532 ****** 2025-01-16 15:08:15.189481 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.189490 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.189498 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.189507 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.189516 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.189524 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.189533 | orchestrator | 2025-01-16 15:08:15.189542 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-01-16 15:08:15.189570 | orchestrator | Thursday 16 January 2025 15:00:06 +0000 (0:00:00.807) 0:01:25.340 ****** 2025-01-16 15:08:15.189584 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.189598 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.189616 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.189630 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.189639 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.189646 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.189654 | orchestrator | 2025-01-16 15:08:15.189662 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-01-16 15:08:15.189670 | orchestrator | Thursday 16 January 2025 15:00:07 +0000 (0:00:00.586) 0:01:25.926 ****** 2025-01-16 15:08:15.189678 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.189686 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.189694 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.189702 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.189709 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.189717 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.189725 | orchestrator | 2025-01-16 15:08:15.189733 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-01-16 15:08:15.189741 | orchestrator | Thursday 16 January 2025 15:00:08 +0000 (0:00:00.669) 0:01:26.596 ****** 2025-01-16 15:08:15.189748 | 
orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.189756 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.189764 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.189772 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.189780 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.189788 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.189796 | orchestrator | 2025-01-16 15:08:15.189804 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-01-16 15:08:15.189812 | orchestrator | Thursday 16 January 2025 15:00:08 +0000 (0:00:00.731) 0:01:27.327 ****** 2025-01-16 15:08:15.189820 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.189827 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.189835 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.189843 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.189851 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.189859 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.189867 | orchestrator | 2025-01-16 15:08:15.189875 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-01-16 15:08:15.189883 | orchestrator | Thursday 16 January 2025 15:00:09 +0000 (0:00:00.925) 0:01:28.253 ****** 2025-01-16 15:08:15.189891 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.189899 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.189907 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.189915 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.189923 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.189935 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.189943 | orchestrator | 2025-01-16 15:08:15.189951 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-01-16 15:08:15.189959 | orchestrator | Thursday 16 January 2025 15:00:10 +0000 (0:00:00.659) 0:01:28.913 ****** 2025-01-16 15:08:15.189967 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.189975 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.189982 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.189990 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.189998 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.190006 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.190014 | orchestrator | 2025-01-16 15:08:15.190050 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-01-16 15:08:15.190058 | orchestrator | Thursday 16 January 2025 15:00:11 +0000 (0:00:00.649) 0:01:29.563 ****** 2025-01-16 15:08:15.190066 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.190074 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.190082 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.190143 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.190154 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.190163 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.190171 | orchestrator | 2025-01-16 15:08:15.190179 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-01-16 15:08:15.190187 | orchestrator | Thursday 16 January 2025 15:00:11 +0000 (0:00:00.607) 0:01:30.170 ****** 2025-01-16 
15:08:15.190195 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.190203 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.190211 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.190219 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.190226 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.190234 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.190242 | orchestrator | 2025-01-16 15:08:15.190250 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-01-16 15:08:15.190258 | orchestrator | Thursday 16 January 2025 15:00:12 +0000 (0:00:00.733) 0:01:30.904 ****** 2025-01-16 15:08:15.190266 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.190274 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.190282 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.190290 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.190302 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.190310 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.190318 | orchestrator | 2025-01-16 15:08:15.190326 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-01-16 15:08:15.190335 | orchestrator | Thursday 16 January 2025 15:00:12 +0000 (0:00:00.517) 0:01:31.421 ****** 2025-01-16 15:08:15.190343 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.190350 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.190358 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.190366 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.190374 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.190382 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.190390 | orchestrator | 2025-01-16 15:08:15.190413 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-01-16 15:08:15.190422 | orchestrator | Thursday 16 January 2025 15:00:13 +0000 (0:00:00.886) 0:01:32.307 ****** 2025-01-16 15:08:15.190430 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.190438 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.190446 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.190454 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.190462 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.190469 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.190477 | orchestrator | 2025-01-16 15:08:15.190491 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-01-16 15:08:15.190499 | orchestrator | Thursday 16 January 2025 15:00:14 +0000 (0:00:00.635) 0:01:32.943 ****** 2025-01-16 15:08:15.190508 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.190516 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.190524 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.190531 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.190539 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.190547 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.190595 | orchestrator | 2025-01-16 15:08:15.190605 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-01-16 15:08:15.190614 
| orchestrator | Thursday 16 January 2025 15:00:15 +0000 (0:00:00.628) 0:01:33.571 ****** 2025-01-16 15:08:15.190622 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.190630 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.190637 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.190645 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.190654 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.190662 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.190670 | orchestrator | 2025-01-16 15:08:15.190678 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-01-16 15:08:15.190686 | orchestrator | Thursday 16 January 2025 15:00:15 +0000 (0:00:00.423) 0:01:33.995 ****** 2025-01-16 15:08:15.190694 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-01-16 15:08:15.190702 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-01-16 15:08:15.190710 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-01-16 15:08:15.190718 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-01-16 15:08:15.190726 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.190734 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-01-16 15:08:15.190743 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-01-16 15:08:15.190750 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.190758 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-01-16 15:08:15.190766 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-01-16 15:08:15.190774 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.190782 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-01-16 15:08:15.190790 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-01-16 15:08:15.190798 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.190806 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.190814 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-01-16 15:08:15.190822 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-01-16 15:08:15.190829 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.190837 | orchestrator | 2025-01-16 15:08:15.190845 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-01-16 15:08:15.190853 | orchestrator | Thursday 16 January 2025 15:00:16 +0000 (0:00:01.092) 0:01:35.087 ****** 2025-01-16 15:08:15.190861 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-01-16 15:08:15.190869 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-01-16 15:08:15.190877 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.190885 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-01-16 15:08:15.190893 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-01-16 15:08:15.190901 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.190959 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-01-16 15:08:15.190970 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-01-16 15:08:15.190978 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.190986 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-01-16 15:08:15.190993 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-01-16 
15:08:15.191006 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.191013 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-01-16 15:08:15.191020 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-01-16 15:08:15.191028 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.191035 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-01-16 15:08:15.191042 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-01-16 15:08:15.191050 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.191057 | orchestrator | 2025-01-16 15:08:15.191064 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-01-16 15:08:15.191072 | orchestrator | Thursday 16 January 2025 15:00:17 +0000 (0:00:00.966) 0:01:36.054 ****** 2025-01-16 15:08:15.191079 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.191086 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.191094 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.191101 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.191108 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.191116 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.191123 | orchestrator | 2025-01-16 15:08:15.191130 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-01-16 15:08:15.191138 | orchestrator | Thursday 16 January 2025 15:00:18 +0000 (0:00:00.630) 0:01:36.684 ****** 2025-01-16 15:08:15.191145 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.191152 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.191159 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.191167 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.191174 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.191181 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.191188 | orchestrator | 2025-01-16 15:08:15.191196 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-01-16 15:08:15.191203 | orchestrator | Thursday 16 January 2025 15:00:18 +0000 (0:00:00.620) 0:01:37.304 ****** 2025-01-16 15:08:15.191210 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.191221 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.191229 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.191236 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.191243 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.191251 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.191258 | orchestrator | 2025-01-16 15:08:15.191265 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-01-16 15:08:15.191275 | orchestrator | Thursday 16 January 2025 15:00:19 +0000 (0:00:00.986) 0:01:38.291 ****** 2025-01-16 15:08:15.191283 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.191290 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.191297 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.191305 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.191312 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.191319 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.191326 | orchestrator | 
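Side note on the ceph-config section above: the num_osds bookkeeping is skipped in this run, but the task names describe what it would do on an LVM-based OSD host: ask 'ceph-volume lvm batch --report' how many OSDs would be created from the configured devices, ask 'ceph-volume lvm list' how many already exist, and add the two. The following is a rough, hand-rolled sketch of that counting, not what this playbook executed; the device paths /dev/vdb and /dev/vdc, the assumption that ceph-volume is callable directly on the host, and the JSON parsing are all illustrative.

# num_osds_sketch.yml -- illustrative only; run against an OSD host with ceph-volume installed
- hosts: testbed-node-3
  gather_facts: false
  become: true
  tasks:
    - name: run 'ceph-volume lvm batch --report' to see how many osds would be created (sketch)
      ansible.builtin.command: ceph-volume lvm batch --report --format=json /dev/vdb /dev/vdc
      register: lvm_batch_report
      changed_when: false

    - name: run 'ceph-volume lvm list' to count osds already created (sketch)
      ansible.builtin.command: ceph-volume lvm list --format=json
      register: lvm_list
      changed_when: false

    - name: set_fact num_osds (planned plus existing)
      ansible.builtin.set_fact:
        # new-style batch report is a JSON list of planned OSDs (older releases wrap it
        # in an 'osds' key, hence the separate legacy/new tasks above);
        # 'lvm list' returns a dict keyed by OSD id.
        num_osds: "{{ (lvm_batch_report.stdout | from_json | length) + (lvm_list.stdout | from_json | dict2items | length) }}"

    - name: show the result
      ansible.builtin.debug:
        var: num_osds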
2025-01-16 15:08:15.191334 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-01-16 15:08:15.191341 | orchestrator | Thursday 16 January 2025 15:00:20 +0000 (0:00:01.018) 0:01:39.310 ****** 2025-01-16 15:08:15.191348 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.191355 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.191363 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.191370 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.191377 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.191384 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.191392 | orchestrator | 2025-01-16 15:08:15.191399 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-01-16 15:08:15.191407 | orchestrator | Thursday 16 January 2025 15:00:21 +0000 (0:00:00.953) 0:01:40.263 ****** 2025-01-16 15:08:15.191420 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.191427 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.191435 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.191442 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.191449 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.191456 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.191464 | orchestrator | 2025-01-16 15:08:15.191471 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-01-16 15:08:15.191478 | orchestrator | Thursday 16 January 2025 15:00:22 +0000 (0:00:00.693) 0:01:40.956 ****** 2025-01-16 15:08:15.191485 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-01-16 15:08:15.191493 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-01-16 15:08:15.191500 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-01-16 15:08:15.191507 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.191514 | orchestrator | 2025-01-16 15:08:15.191521 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-01-16 15:08:15.191529 | orchestrator | Thursday 16 January 2025 15:00:22 +0000 (0:00:00.298) 0:01:41.254 ****** 2025-01-16 15:08:15.191536 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-01-16 15:08:15.191543 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-01-16 15:08:15.191551 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-01-16 15:08:15.191572 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.191583 | orchestrator | 2025-01-16 15:08:15.191590 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-01-16 15:08:15.191598 | orchestrator | Thursday 16 January 2025 15:00:23 +0000 (0:00:00.315) 0:01:41.570 ****** 2025-01-16 15:08:15.191606 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-01-16 15:08:15.191658 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-01-16 15:08:15.191669 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-01-16 15:08:15.191678 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.191686 | orchestrator | 2025-01-16 15:08:15.191694 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-01-16 15:08:15.191703 | orchestrator | Thursday 16 
January 2025 15:00:23 +0000 (0:00:00.462) 0:01:42.033 ****** 2025-01-16 15:08:15.191711 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.191719 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.191727 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.191735 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.191744 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.191752 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.191760 | orchestrator | 2025-01-16 15:08:15.191768 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-01-16 15:08:15.191777 | orchestrator | Thursday 16 January 2025 15:00:24 +0000 (0:00:00.615) 0:01:42.649 ****** 2025-01-16 15:08:15.191785 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-01-16 15:08:15.191793 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-01-16 15:08:15.191802 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.191810 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-01-16 15:08:15.191818 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.191826 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-01-16 15:08:15.191834 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.191841 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.191849 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-01-16 15:08:15.191856 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.191864 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-01-16 15:08:15.191872 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.191880 | orchestrator | 2025-01-16 15:08:15.191887 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-01-16 15:08:15.191900 | orchestrator | Thursday 16 January 2025 15:00:24 +0000 (0:00:00.696) 0:01:43.345 ****** 2025-01-16 15:08:15.191908 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.191928 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.191936 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.191943 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.191951 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.191958 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.191966 | orchestrator | 2025-01-16 15:08:15.191973 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-01-16 15:08:15.191981 | orchestrator | Thursday 16 January 2025 15:00:25 +0000 (0:00:00.714) 0:01:44.060 ****** 2025-01-16 15:08:15.191989 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.191996 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.192004 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.192011 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.192019 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.192026 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.192034 | orchestrator | 2025-01-16 15:08:15.192042 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-01-16 15:08:15.192049 | orchestrator | Thursday 16 January 2025 15:00:26 +0000 (0:00:00.549) 0:01:44.610 ****** 2025-01-16 15:08:15.192057 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-01-16 15:08:15.192065 | orchestrator | skipping: 
[testbed-node-3] 2025-01-16 15:08:15.192076 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-01-16 15:08:15.192084 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-01-16 15:08:15.192091 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.192099 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.192107 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-01-16 15:08:15.192114 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.192125 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-01-16 15:08:15.192133 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.192140 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-01-16 15:08:15.192148 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.192156 | orchestrator | 2025-01-16 15:08:15.192163 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-01-16 15:08:15.192171 | orchestrator | Thursday 16 January 2025 15:00:26 +0000 (0:00:00.898) 0:01:45.508 ****** 2025-01-16 15:08:15.192179 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-01-16 15:08:15.192186 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.192194 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-01-16 15:08:15.192202 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.192210 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-01-16 15:08:15.192217 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.192225 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.192232 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.192240 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.192247 | orchestrator | 2025-01-16 15:08:15.192255 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-01-16 15:08:15.192262 | orchestrator | Thursday 16 January 2025 15:00:27 +0000 (0:00:00.496) 0:01:46.005 ****** 2025-01-16 15:08:15.192270 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-01-16 15:08:15.192278 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-01-16 15:08:15.192285 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-01-16 15:08:15.192293 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.192304 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-01-16 15:08:15.192312 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-01-16 15:08:15.192364 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-01-16 15:08:15.192379 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.192391 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-01-16 15:08:15.192402 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-01-16 15:08:15.192414 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-01-16 15:08:15.192425 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-01-16 15:08:15.192432 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-01-16 15:08:15.192439 | 
orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.192446 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-01-16 15:08:15.192453 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-01-16 15:08:15.192460 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-01-16 15:08:15.192467 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-01-16 15:08:15.192474 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.192481 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.192488 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-01-16 15:08:15.192495 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-01-16 15:08:15.192502 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-01-16 15:08:15.192509 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.192516 | orchestrator | 2025-01-16 15:08:15.192523 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-01-16 15:08:15.192530 | orchestrator | Thursday 16 January 2025 15:00:29 +0000 (0:00:01.933) 0:01:47.939 ****** 2025-01-16 15:08:15.192537 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.192544 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.192551 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.192574 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.192581 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.192588 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.192595 | orchestrator | 2025-01-16 15:08:15.192602 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-01-16 15:08:15.192609 | orchestrator | Thursday 16 January 2025 15:00:30 +0000 (0:00:01.326) 0:01:49.265 ****** 2025-01-16 15:08:15.192616 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-01-16 15:08:15.192624 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.192631 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-01-16 15:08:15.192637 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.192644 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-01-16 15:08:15.192651 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.192659 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.192666 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.192672 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.192679 | orchestrator | 2025-01-16 15:08:15.192687 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-01-16 15:08:15.192694 | orchestrator | Thursday 16 January 2025 15:00:31 +0000 (0:00:01.273) 0:01:50.539 ****** 2025-01-16 15:08:15.192701 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.192708 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.192715 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.192722 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.192729 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.192736 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.192743 | orchestrator | 2025-01-16 15:08:15.192750 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-01-16 15:08:15.192766 | orchestrator | Thursday 
16 January 2025 15:00:33 +0000 (0:00:01.373) 0:01:51.912 ****** 2025-01-16 15:08:15.192773 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.192781 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.192788 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.192795 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.192801 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.192809 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.192816 | orchestrator | 2025-01-16 15:08:15.192823 | orchestrator | TASK [ceph-container-common : generate systemd ceph-mon target file] *********** 2025-01-16 15:08:15.192830 | orchestrator | Thursday 16 January 2025 15:00:34 +0000 (0:00:01.123) 0:01:53.035 ****** 2025-01-16 15:08:15.192837 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:08:15.192844 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:08:15.192850 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:08:15.192857 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:08:15.192864 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:08:15.192871 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:08:15.192878 | orchestrator | 2025-01-16 15:08:15.192885 | orchestrator | TASK [ceph-container-common : enable ceph.target] ****************************** 2025-01-16 15:08:15.192892 | orchestrator | Thursday 16 January 2025 15:00:36 +0000 (0:00:01.613) 0:01:54.649 ****** 2025-01-16 15:08:15.192899 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:08:15.192906 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:08:15.192913 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:08:15.192920 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:08:15.192927 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:08:15.192938 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:08:15.192945 | orchestrator | 2025-01-16 15:08:15.192952 | orchestrator | TASK [ceph-container-common : include prerequisites.yml] *********************** 2025-01-16 15:08:15.192959 | orchestrator | Thursday 16 January 2025 15:00:37 +0000 (0:00:01.881) 0:01:56.531 ****** 2025-01-16 15:08:15.192967 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:08:15.192975 | orchestrator | 2025-01-16 15:08:15.192982 | orchestrator | TASK [ceph-container-common : stop lvmetad] ************************************ 2025-01-16 15:08:15.192989 | orchestrator | Thursday 16 January 2025 15:00:39 +0000 (0:00:01.015) 0:01:57.547 ****** 2025-01-16 15:08:15.192996 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.193005 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.193013 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.193071 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.193082 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.193090 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.193097 | orchestrator | 2025-01-16 15:08:15.193106 | orchestrator | TASK [ceph-container-common : disable and mask lvmetad service] **************** 2025-01-16 15:08:15.193114 | orchestrator | Thursday 16 January 2025 15:00:39 +0000 (0:00:00.476) 0:01:58.023 ****** 2025-01-16 15:08:15.193122 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.193129 | orchestrator | skipping: [testbed-node-4] 2025-01-16 
15:08:15.193137 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.193146 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.193158 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.193170 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.193182 | orchestrator | 2025-01-16 15:08:15.193190 | orchestrator | TASK [ceph-container-common : remove ceph udev rules] ************************** 2025-01-16 15:08:15.193198 | orchestrator | Thursday 16 January 2025 15:00:40 +0000 (0:00:00.772) 0:01:58.796 ****** 2025-01-16 15:08:15.193206 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-01-16 15:08:15.193214 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-01-16 15:08:15.193222 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-01-16 15:08:15.193236 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-01-16 15:08:15.193244 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-01-16 15:08:15.193252 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-01-16 15:08:15.193259 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-01-16 15:08:15.193267 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-01-16 15:08:15.193275 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-01-16 15:08:15.193283 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-01-16 15:08:15.193290 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-01-16 15:08:15.193298 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-01-16 15:08:15.193306 | orchestrator | 2025-01-16 15:08:15.193314 | orchestrator | TASK [ceph-container-common : ensure tmpfiles.d is present] ******************** 2025-01-16 15:08:15.193340 | orchestrator | Thursday 16 January 2025 15:00:41 +0000 (0:00:01.344) 0:02:00.140 ****** 2025-01-16 15:08:15.193353 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:08:15.193365 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:08:15.193373 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:08:15.193381 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:08:15.193389 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:08:15.193396 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:08:15.193403 | orchestrator | 2025-01-16 15:08:15.193410 | orchestrator | TASK [ceph-container-common : restore certificates selinux context] ************ 2025-01-16 15:08:15.193417 | orchestrator | Thursday 16 January 2025 15:00:42 +0000 (0:00:01.197) 0:02:01.337 ****** 2025-01-16 15:08:15.193424 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.193431 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.193438 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.193445 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.193452 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.193459 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.193465 | orchestrator | 2025-01-16 15:08:15.193472 | orchestrator | TASK [ceph-container-common : include 
registry.yml] **************************** 2025-01-16 15:08:15.193479 | orchestrator | Thursday 16 January 2025 15:00:43 +0000 (0:00:00.644) 0:02:01.982 ****** 2025-01-16 15:08:15.193486 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.193493 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.193500 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.193507 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.193514 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.193521 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.193528 | orchestrator | 2025-01-16 15:08:15.193535 | orchestrator | TASK [ceph-container-common : include fetch_image.yml] ************************* 2025-01-16 15:08:15.193546 | orchestrator | Thursday 16 January 2025 15:00:44 +0000 (0:00:00.903) 0:02:02.886 ****** 2025-01-16 15:08:15.193566 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:08:15.193578 | orchestrator | 2025-01-16 15:08:15.193590 | orchestrator | TASK [ceph-container-common : pulling nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy image] *** 2025-01-16 15:08:15.193602 | orchestrator | Thursday 16 January 2025 15:00:45 +0000 (0:00:01.035) 0:02:03.921 ****** 2025-01-16 15:08:15.193613 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.193623 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.193630 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.193637 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.193644 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.193658 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.193665 | orchestrator | 2025-01-16 15:08:15.193673 | orchestrator | TASK [ceph-container-common : pulling alertmanager/prometheus/grafana container images] *** 2025-01-16 15:08:15.193679 | orchestrator | Thursday 16 January 2025 15:01:03 +0000 (0:00:17.639) 0:02:21.560 ****** 2025-01-16 15:08:15.193686 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-01-16 15:08:15.193693 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-01-16 15:08:15.193700 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-01-16 15:08:15.193707 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.193770 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-01-16 15:08:15.193781 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-01-16 15:08:15.193789 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-01-16 15:08:15.193796 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.193804 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-01-16 15:08:15.193811 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-01-16 15:08:15.193819 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-01-16 15:08:15.193827 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.193834 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-01-16 15:08:15.193842 | orchestrator | skipping: [testbed-node-0] => 
(item=docker.io/prom/prometheus:v2.7.2)  2025-01-16 15:08:15.193850 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-01-16 15:08:15.193857 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.193865 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-01-16 15:08:15.193872 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-01-16 15:08:15.193880 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-01-16 15:08:15.193887 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.193895 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-01-16 15:08:15.193902 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-01-16 15:08:15.193910 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-01-16 15:08:15.193917 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.193925 | orchestrator | 2025-01-16 15:08:15.193933 | orchestrator | TASK [ceph-container-common : pulling node-exporter container image] *********** 2025-01-16 15:08:15.193940 | orchestrator | Thursday 16 January 2025 15:01:03 +0000 (0:00:00.875) 0:02:22.436 ****** 2025-01-16 15:08:15.193948 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.193955 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.193963 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.193971 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.193978 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.193986 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.193994 | orchestrator | 2025-01-16 15:08:15.194001 | orchestrator | TASK [ceph-container-common : export local ceph dev image] ********************* 2025-01-16 15:08:15.194009 | orchestrator | Thursday 16 January 2025 15:01:04 +0000 (0:00:00.481) 0:02:22.917 ****** 2025-01-16 15:08:15.194035 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.194048 | orchestrator | 2025-01-16 15:08:15.194056 | orchestrator | TASK [ceph-container-common : copy ceph dev image file] ************************ 2025-01-16 15:08:15.194063 | orchestrator | Thursday 16 January 2025 15:01:04 +0000 (0:00:00.106) 0:02:23.024 ****** 2025-01-16 15:08:15.194070 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.194077 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.194090 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.194097 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.194104 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.194112 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.194119 | orchestrator | 2025-01-16 15:08:15.194126 | orchestrator | TASK [ceph-container-common : load ceph dev image] ***************************** 2025-01-16 15:08:15.194133 | orchestrator | Thursday 16 January 2025 15:01:05 +0000 (0:00:00.743) 0:02:23.768 ****** 2025-01-16 15:08:15.194140 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.194147 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.194154 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.194161 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.194168 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.194175 | orchestrator | skipping: [testbed-node-2] 
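The only image fetch that actually runs in this block is the pull of nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy (about 17.6 s, per the timing on the following task header); the alertmanager/prometheus/grafana, node-exporter and local dev-image tasks are all skipped. As a rough illustration of what such a pull task can look like in Ansible -- a minimal sketch with assumed variable names, not the actual ceph-ansible fetch_image.yml --:

    # Sketch only: pull the Ceph container image from the configured registry.
    # ceph_docker_registry / ceph_docker_image / ceph_docker_image_tag are assumed names.
    - name: pulling ceph container image
      community.docker.docker_image:
        name: "{{ ceph_docker_registry }}/{{ ceph_docker_image }}"   # e.g. nexus.testbed.osism.xyz:8193/osism/ceph-daemon
        tag: "{{ ceph_docker_image_tag }}"                           # e.g. quincy
        source: pull
      register: image_pull
      retries: 3
      delay: 10
      until: image_pull is succeeded

On a freshly prepared node this is typically the slowest step of the role, which matches the task timing recorded above.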
2025-01-16 15:08:15.194182 | orchestrator | 2025-01-16 15:08:15.194189 | orchestrator | TASK [ceph-container-common : remove tmp ceph dev image file] ****************** 2025-01-16 15:08:15.194196 | orchestrator | Thursday 16 January 2025 15:01:05 +0000 (0:00:00.773) 0:02:24.542 ****** 2025-01-16 15:08:15.194203 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.194210 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.194217 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.194224 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.194231 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.194238 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.194245 | orchestrator | 2025-01-16 15:08:15.194255 | orchestrator | TASK [ceph-container-common : get ceph version] ******************************** 2025-01-16 15:08:15.194263 | orchestrator | Thursday 16 January 2025 15:01:06 +0000 (0:00:00.799) 0:02:25.341 ****** 2025-01-16 15:08:15.194270 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.194277 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.194284 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.194291 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.194298 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.194305 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.194312 | orchestrator | 2025-01-16 15:08:15.194319 | orchestrator | TASK [ceph-container-common : set_fact ceph_version ceph_version.stdout.split] *** 2025-01-16 15:08:15.194326 | orchestrator | Thursday 16 January 2025 15:01:09 +0000 (0:00:02.348) 0:02:27.689 ****** 2025-01-16 15:08:15.194333 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.194340 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.194347 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.194354 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.194361 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.194368 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.194375 | orchestrator | 2025-01-16 15:08:15.194382 | orchestrator | TASK [ceph-container-common : include release.yml] ***************************** 2025-01-16 15:08:15.194390 | orchestrator | Thursday 16 January 2025 15:01:09 +0000 (0:00:00.828) 0:02:28.518 ****** 2025-01-16 15:08:15.194439 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:08:15.194452 | orchestrator | 2025-01-16 15:08:15.194460 | orchestrator | TASK [ceph-container-common : set_fact ceph_release jewel] ********************* 2025-01-16 15:08:15.194468 | orchestrator | Thursday 16 January 2025 15:01:11 +0000 (0:00:01.321) 0:02:29.839 ****** 2025-01-16 15:08:15.194475 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.194484 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.194492 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.194501 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.194508 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.194516 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.194524 | orchestrator | 2025-01-16 15:08:15.194532 | orchestrator | TASK [ceph-container-common : set_fact ceph_release kraken] ******************** 2025-01-16 15:08:15.194547 | orchestrator | Thursday 16 January 2025 15:01:11 +0000 (0:00:00.589) 0:02:30.429 
****** 2025-01-16 15:08:15.194594 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.194603 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.194611 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.194618 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.194626 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.194634 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.194641 | orchestrator | 2025-01-16 15:08:15.194649 | orchestrator | TASK [ceph-container-common : set_fact ceph_release luminous] ****************** 2025-01-16 15:08:15.194657 | orchestrator | Thursday 16 January 2025 15:01:12 +0000 (0:00:00.870) 0:02:31.300 ****** 2025-01-16 15:08:15.194665 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.194673 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.194681 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.194689 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.194727 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.194736 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.194744 | orchestrator | 2025-01-16 15:08:15.194751 | orchestrator | TASK [ceph-container-common : set_fact ceph_release mimic] ********************* 2025-01-16 15:08:15.194759 | orchestrator | Thursday 16 January 2025 15:01:13 +0000 (0:00:00.466) 0:02:31.766 ****** 2025-01-16 15:08:15.194767 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.194774 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.194781 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.194792 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.194799 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.194806 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.194813 | orchestrator | 2025-01-16 15:08:15.194820 | orchestrator | TASK [ceph-container-common : set_fact ceph_release nautilus] ****************** 2025-01-16 15:08:15.194827 | orchestrator | Thursday 16 January 2025 15:01:14 +0000 (0:00:00.865) 0:02:32.632 ****** 2025-01-16 15:08:15.194834 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.194841 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.194847 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.194854 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.194861 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.194868 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.194874 | orchestrator | 2025-01-16 15:08:15.194881 | orchestrator | TASK [ceph-container-common : set_fact ceph_release octopus] ******************* 2025-01-16 15:08:15.194887 | orchestrator | Thursday 16 January 2025 15:01:14 +0000 (0:00:00.673) 0:02:33.305 ****** 2025-01-16 15:08:15.194893 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.194900 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.194906 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.194912 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.194918 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.194924 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.194931 | orchestrator | 2025-01-16 15:08:15.194937 | orchestrator | TASK [ceph-container-common : set_fact ceph_release pacific] ******************* 2025-01-16 15:08:15.194943 | orchestrator | Thursday 16 January 2025 15:01:15 +0000 (0:00:01.055) 
0:02:34.361 ****** 2025-01-16 15:08:15.194949 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.194956 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.194962 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.194968 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.194974 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.194980 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.194986 | orchestrator | 2025-01-16 15:08:15.194992 | orchestrator | TASK [ceph-container-common : set_fact ceph_release quincy] ******************** 2025-01-16 15:08:15.194999 | orchestrator | Thursday 16 January 2025 15:01:16 +0000 (0:00:00.870) 0:02:35.231 ****** 2025-01-16 15:08:15.195005 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.195011 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.195022 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.195028 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.195035 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.195062 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.195069 | orchestrator | 2025-01-16 15:08:15.195075 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-01-16 15:08:15.195081 | orchestrator | Thursday 16 January 2025 15:01:18 +0000 (0:00:01.328) 0:02:36.559 ****** 2025-01-16 15:08:15.195088 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:08:15.195094 | orchestrator | 2025-01-16 15:08:15.195101 | orchestrator | TASK [ceph-config : create ceph initial directories] *************************** 2025-01-16 15:08:15.195107 | orchestrator | Thursday 16 January 2025 15:01:18 +0000 (0:00:00.942) 0:02:37.502 ****** 2025-01-16 15:08:15.195113 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-01-16 15:08:15.195119 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-01-16 15:08:15.195125 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-01-16 15:08:15.195132 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-01-16 15:08:15.195138 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-01-16 15:08:15.195144 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-01-16 15:08:15.195198 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-01-16 15:08:15.195206 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-01-16 15:08:15.195213 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-01-16 15:08:15.195219 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-01-16 15:08:15.195226 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-01-16 15:08:15.195232 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-01-16 15:08:15.195238 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-01-16 15:08:15.195244 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-01-16 15:08:15.195250 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-01-16 15:08:15.195257 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-01-16 15:08:15.195263 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-01-16 15:08:15.195269 | orchestrator | 
changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-01-16 15:08:15.195275 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-01-16 15:08:15.195281 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-01-16 15:08:15.195287 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-01-16 15:08:15.195294 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-01-16 15:08:15.195300 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-01-16 15:08:15.195306 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-01-16 15:08:15.195313 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-01-16 15:08:15.195319 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-01-16 15:08:15.195325 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-01-16 15:08:15.195334 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-01-16 15:08:15.195341 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-01-16 15:08:15.195347 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-01-16 15:08:15.195353 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-01-16 15:08:15.195360 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-01-16 15:08:15.195366 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-01-16 15:08:15.195372 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-01-16 15:08:15.195384 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-01-16 15:08:15.195391 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-01-16 15:08:15.195397 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-01-16 15:08:15.195403 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-01-16 15:08:15.195410 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-01-16 15:08:15.195416 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-01-16 15:08:15.195422 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-01-16 15:08:15.195428 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-01-16 15:08:15.195434 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-01-16 15:08:15.195441 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-01-16 15:08:15.195447 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-01-16 15:08:15.195453 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-01-16 15:08:15.195459 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-01-16 15:08:15.195465 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-01-16 15:08:15.195472 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-01-16 15:08:15.195478 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-01-16 15:08:15.195484 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-01-16 15:08:15.195490 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-01-16 15:08:15.195497 | orchestrator 
| changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-01-16 15:08:15.195503 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-01-16 15:08:15.195509 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-01-16 15:08:15.195515 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-01-16 15:08:15.195521 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-01-16 15:08:15.195528 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-01-16 15:08:15.195534 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-01-16 15:08:15.195541 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-01-16 15:08:15.195547 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-01-16 15:08:15.195567 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-01-16 15:08:15.195574 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-01-16 15:08:15.195580 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-01-16 15:08:15.195587 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-01-16 15:08:15.195630 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-01-16 15:08:15.195640 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-01-16 15:08:15.195647 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-01-16 15:08:15.195653 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-01-16 15:08:15.195660 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-01-16 15:08:15.195667 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-01-16 15:08:15.195673 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-01-16 15:08:15.195680 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-01-16 15:08:15.195686 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-01-16 15:08:15.195697 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-01-16 15:08:15.195704 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-01-16 15:08:15.195710 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-01-16 15:08:15.195717 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-01-16 15:08:15.195724 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-01-16 15:08:15.195731 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-01-16 15:08:15.195738 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-01-16 15:08:15.195747 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-01-16 15:08:15.195754 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-01-16 15:08:15.195761 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-01-16 15:08:15.195768 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-01-16 15:08:15.195774 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-01-16 
15:08:15.195781 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-01-16 15:08:15.195788 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-01-16 15:08:15.195794 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-01-16 15:08:15.195801 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-01-16 15:08:15.195808 | orchestrator | 2025-01-16 15:08:15.195814 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-01-16 15:08:15.195821 | orchestrator | Thursday 16 January 2025 15:01:23 +0000 (0:00:04.636) 0:02:42.138 ****** 2025-01-16 15:08:15.195827 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.195834 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.195840 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.195847 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 15:08:15.195855 | orchestrator | 2025-01-16 15:08:15.195861 | orchestrator | TASK [ceph-config : create rados gateway instance directories] ***************** 2025-01-16 15:08:15.195868 | orchestrator | Thursday 16 January 2025 15:01:24 +0000 (0:00:00.755) 0:02:42.893 ****** 2025-01-16 15:08:15.195874 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-01-16 15:08:15.195881 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-01-16 15:08:15.195888 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-01-16 15:08:15.195906 | orchestrator | 2025-01-16 15:08:15.195913 | orchestrator | TASK [ceph-config : generate environment file] ********************************* 2025-01-16 15:08:15.195920 | orchestrator | Thursday 16 January 2025 15:01:24 +0000 (0:00:00.493) 0:02:43.387 ****** 2025-01-16 15:08:15.195926 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-01-16 15:08:15.195933 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-01-16 15:08:15.195940 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-01-16 15:08:15.195946 | orchestrator | 2025-01-16 15:08:15.195953 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-01-16 15:08:15.195959 | orchestrator | Thursday 16 January 2025 15:01:25 +0000 (0:00:01.020) 0:02:44.407 ****** 2025-01-16 15:08:15.195966 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.195973 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.195979 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.196017 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.196024 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.196031 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.196038 | orchestrator | 2025-01-16 15:08:15.196044 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-01-16 
15:08:15.196051 | orchestrator | Thursday 16 January 2025 15:01:26 +0000 (0:00:00.424) 0:02:44.832 ****** 2025-01-16 15:08:15.196058 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.196064 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.196071 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.196078 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.196085 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.196091 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.196098 | orchestrator | 2025-01-16 15:08:15.196105 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-01-16 15:08:15.196148 | orchestrator | Thursday 16 January 2025 15:01:26 +0000 (0:00:00.593) 0:02:45.425 ****** 2025-01-16 15:08:15.196157 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.196164 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.196170 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.196176 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.196182 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.196189 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.196195 | orchestrator | 2025-01-16 15:08:15.196201 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-01-16 15:08:15.196208 | orchestrator | Thursday 16 January 2025 15:01:27 +0000 (0:00:00.497) 0:02:45.923 ****** 2025-01-16 15:08:15.196214 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.196220 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.196226 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.196236 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.196243 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.196250 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.196256 | orchestrator | 2025-01-16 15:08:15.196263 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-01-16 15:08:15.196270 | orchestrator | Thursday 16 January 2025 15:01:28 +0000 (0:00:00.622) 0:02:46.545 ****** 2025-01-16 15:08:15.196276 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.196283 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.196290 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.196296 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.196303 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.196309 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.196316 | orchestrator | 2025-01-16 15:08:15.196323 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-01-16 15:08:15.196330 | orchestrator | Thursday 16 January 2025 15:01:28 +0000 (0:00:00.553) 0:02:47.099 ****** 2025-01-16 15:08:15.196336 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.196346 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.196352 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.196359 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.196366 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.196372 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.196379 | orchestrator | 2025-01-16 15:08:15.196385 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch 
--report' (legacy report)] *** 2025-01-16 15:08:15.196392 | orchestrator | Thursday 16 January 2025 15:01:29 +0000 (0:00:00.636) 0:02:47.735 ****** 2025-01-16 15:08:15.196399 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.196406 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.196412 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.196419 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.196425 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.196436 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.196443 | orchestrator | 2025-01-16 15:08:15.196450 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-01-16 15:08:15.196456 | orchestrator | Thursday 16 January 2025 15:01:29 +0000 (0:00:00.477) 0:02:48.213 ****** 2025-01-16 15:08:15.196463 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.196470 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.196477 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.196483 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.196490 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.196496 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.196503 | orchestrator | 2025-01-16 15:08:15.196509 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-01-16 15:08:15.196516 | orchestrator | Thursday 16 January 2025 15:01:30 +0000 (0:00:00.603) 0:02:48.817 ****** 2025-01-16 15:08:15.196523 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.196529 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.196536 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.196543 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.196550 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.196595 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.196602 | orchestrator | 2025-01-16 15:08:15.196608 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-01-16 15:08:15.196615 | orchestrator | Thursday 16 January 2025 15:01:31 +0000 (0:00:01.272) 0:02:50.090 ****** 2025-01-16 15:08:15.196621 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.196627 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.196633 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.196640 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.196646 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.196652 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.196658 | orchestrator | 2025-01-16 15:08:15.196664 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-01-16 15:08:15.196671 | orchestrator | Thursday 16 January 2025 15:01:32 +0000 (0:00:00.565) 0:02:50.655 ****** 2025-01-16 15:08:15.196677 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-01-16 15:08:15.196684 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-01-16 15:08:15.196690 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.196697 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-01-16 15:08:15.196703 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-01-16 15:08:15.196709 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.196715 | orchestrator | skipping: 
[testbed-node-5] => (item=)  2025-01-16 15:08:15.196722 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-01-16 15:08:15.196728 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.196734 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-01-16 15:08:15.196740 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-01-16 15:08:15.196746 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.196753 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-01-16 15:08:15.196759 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-01-16 15:08:15.196765 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.196771 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-01-16 15:08:15.196778 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-01-16 15:08:15.196784 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.196791 | orchestrator | 2025-01-16 15:08:15.196841 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-01-16 15:08:15.196850 | orchestrator | Thursday 16 January 2025 15:01:32 +0000 (0:00:00.672) 0:02:51.327 ****** 2025-01-16 15:08:15.196857 | orchestrator | ok: [testbed-node-3] => (item=osd memory target) 2025-01-16 15:08:15.196864 | orchestrator | ok: [testbed-node-3] => (item=osd_memory_target) 2025-01-16 15:08:15.196883 | orchestrator | ok: [testbed-node-4] => (item=osd memory target) 2025-01-16 15:08:15.196890 | orchestrator | ok: [testbed-node-4] => (item=osd_memory_target) 2025-01-16 15:08:15.196898 | orchestrator | ok: [testbed-node-5] => (item=osd memory target) 2025-01-16 15:08:15.196905 | orchestrator | ok: [testbed-node-5] => (item=osd_memory_target) 2025-01-16 15:08:15.196912 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-01-16 15:08:15.196919 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-01-16 15:08:15.196925 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.196933 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-01-16 15:08:15.196939 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-01-16 15:08:15.196946 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.196953 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-01-16 15:08:15.196960 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-01-16 15:08:15.196967 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.196974 | orchestrator | 2025-01-16 15:08:15.196981 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-01-16 15:08:15.196988 | orchestrator | Thursday 16 January 2025 15:01:33 +0000 (0:00:00.558) 0:02:51.885 ****** 2025-01-16 15:08:15.196995 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.197002 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.197009 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.197017 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.197028 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.197035 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.197043 | orchestrator | 2025-01-16 15:08:15.197050 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-01-16 15:08:15.197057 | orchestrator | Thursday 16 January 2025 15:01:33 +0000 (0:00:00.630) 0:02:52.516 ****** 2025-01-16 
15:08:15.197064 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.197071 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.197078 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.197085 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.197092 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.197099 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.197106 | orchestrator | 2025-01-16 15:08:15.197113 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-01-16 15:08:15.197120 | orchestrator | Thursday 16 January 2025 15:01:34 +0000 (0:00:00.435) 0:02:52.951 ****** 2025-01-16 15:08:15.197127 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.197133 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.197140 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.197147 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.197153 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.197160 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.197166 | orchestrator | 2025-01-16 15:08:15.197176 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-01-16 15:08:15.197193 | orchestrator | Thursday 16 January 2025 15:01:34 +0000 (0:00:00.446) 0:02:53.398 ****** 2025-01-16 15:08:15.197199 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.197206 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.197212 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.197217 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.197223 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.197229 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.197235 | orchestrator | 2025-01-16 15:08:15.197241 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-01-16 15:08:15.197247 | orchestrator | Thursday 16 January 2025 15:01:35 +0000 (0:00:00.639) 0:02:54.037 ****** 2025-01-16 15:08:15.197253 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.197263 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.197269 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.197275 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.197281 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.197287 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.197292 | orchestrator | 2025-01-16 15:08:15.197299 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-01-16 15:08:15.197305 | orchestrator | Thursday 16 January 2025 15:01:35 +0000 (0:00:00.421) 0:02:54.458 ****** 2025-01-16 15:08:15.197311 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.197317 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.197323 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.197329 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.197334 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.197340 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.197346 | orchestrator | 2025-01-16 15:08:15.197352 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-01-16 15:08:15.197358 | orchestrator | Thursday 16 January 2025 15:01:36 +0000 
(0:00:00.861) 0:02:55.319 ****** 2025-01-16 15:08:15.197364 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-01-16 15:08:15.197370 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-01-16 15:08:15.197376 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-01-16 15:08:15.197382 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.197388 | orchestrator | 2025-01-16 15:08:15.197394 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-01-16 15:08:15.197400 | orchestrator | Thursday 16 January 2025 15:01:37 +0000 (0:00:00.298) 0:02:55.618 ****** 2025-01-16 15:08:15.197406 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-01-16 15:08:15.197412 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-01-16 15:08:15.197455 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-01-16 15:08:15.197464 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.197470 | orchestrator | 2025-01-16 15:08:15.197477 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-01-16 15:08:15.197483 | orchestrator | Thursday 16 January 2025 15:01:37 +0000 (0:00:00.296) 0:02:55.914 ****** 2025-01-16 15:08:15.197490 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-01-16 15:08:15.197496 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-01-16 15:08:15.197502 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-01-16 15:08:15.197509 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.197515 | orchestrator | 2025-01-16 15:08:15.197522 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-01-16 15:08:15.197528 | orchestrator | Thursday 16 January 2025 15:01:37 +0000 (0:00:00.319) 0:02:56.233 ****** 2025-01-16 15:08:15.197534 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.197541 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.197547 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.197566 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.197575 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.197585 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.197595 | orchestrator | 2025-01-16 15:08:15.197603 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-01-16 15:08:15.197614 | orchestrator | Thursday 16 January 2025 15:01:38 +0000 (0:00:00.664) 0:02:56.898 ****** 2025-01-16 15:08:15.197620 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-01-16 15:08:15.197626 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-01-16 15:08:15.197632 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-01-16 15:08:15.197638 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.197644 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-01-16 15:08:15.197650 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.197655 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-01-16 15:08:15.197666 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-01-16 15:08:15.197672 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.197678 | orchestrator | 2025-01-16 15:08:15.197684 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-01-16 
15:08:15.197689 | orchestrator | Thursday 16 January 2025 15:01:39 +0000 (0:00:00.743) 0:02:57.642 ****** 2025-01-16 15:08:15.197695 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.197701 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.197707 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.197716 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.197722 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.197728 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.197734 | orchestrator | 2025-01-16 15:08:15.197740 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-01-16 15:08:15.197746 | orchestrator | Thursday 16 January 2025 15:01:39 +0000 (0:00:00.556) 0:02:58.199 ****** 2025-01-16 15:08:15.197751 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.197757 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.197763 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.197769 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.197774 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.197780 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.197786 | orchestrator | 2025-01-16 15:08:15.197792 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-01-16 15:08:15.197798 | orchestrator | Thursday 16 January 2025 15:01:40 +0000 (0:00:00.624) 0:02:58.823 ****** 2025-01-16 15:08:15.197804 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-01-16 15:08:15.197810 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-01-16 15:08:15.197815 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.197821 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.197827 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-01-16 15:08:15.197833 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.197839 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-01-16 15:08:15.197845 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.197851 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-01-16 15:08:15.197857 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.197862 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-01-16 15:08:15.197868 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.197874 | orchestrator | 2025-01-16 15:08:15.197880 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-01-16 15:08:15.197886 | orchestrator | Thursday 16 January 2025 15:01:41 +0000 (0:00:01.128) 0:02:59.952 ****** 2025-01-16 15:08:15.197892 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-01-16 15:08:15.197898 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.197904 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-01-16 15:08:15.197910 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.197915 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-01-16 15:08:15.197921 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.197927 | orchestrator | skipping: [testbed-node-0] 
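The rgw_instances fact being set in this stretch is what produced the per-host items seen earlier in the play (instance_name 'rgw0', radosgw_address 192.168.16.13-15, radosgw_frontend_port 8081). A minimal sketch of how such a fact can be assembled -- assumed variable names (_radosgw_address, radosgw_num_instances), not the actual ceph-facts code --:

    # Sketch only: build one dict per RGW instance on each rgw host.
    - name: set_fact rgw_instances without rgw multisite
      ansible.builtin.set_fact:
        rgw_instances: "{{ rgw_instances | default([]) + [instance] }}"
      vars:
        instance:
          instance_name: "rgw{{ item }}"
          radosgw_address: "{{ _radosgw_address }}"                    # assumed: resolved earlier from radosgw_address/interface
          radosgw_frontend_port: "{{ radosgw_frontend_port | int + item }}"
      loop: "{{ range(0, radosgw_num_instances | default(1)) | list }}"

With a single instance per host this yields exactly one 'rgw0' entry, which is why only item=0 shows up in the task output above.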
2025-01-16 15:08:15.197933 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.197939 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.197945 | orchestrator | 2025-01-16 15:08:15.197950 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-01-16 15:08:15.197956 | orchestrator | Thursday 16 January 2025 15:01:42 +0000 (0:00:00.618) 0:03:00.570 ****** 2025-01-16 15:08:15.197962 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-01-16 15:08:15.197971 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-01-16 15:08:15.197977 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-01-16 15:08:15.198044 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-01-16 15:08:15.198054 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-01-16 15:08:15.198061 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-01-16 15:08:15.198067 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-01-16 15:08:15.198073 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.198080 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-01-16 15:08:15.198086 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-01-16 15:08:15.198093 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-01-16 15:08:15.198099 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-01-16 15:08:15.198105 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-01-16 15:08:15.198112 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.198118 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-01-16 15:08:15.198128 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-01-16 15:08:15.198134 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-01-16 15:08:15.198141 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.198147 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.198154 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.198160 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-01-16 15:08:15.198166 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-01-16 15:08:15.198173 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-01-16 15:08:15.198179 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.198185 | orchestrator | 2025-01-16 15:08:15.198192 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-01-16 15:08:15.198198 | orchestrator | Thursday 16 January 2025 15:01:43 +0000 (0:00:01.385) 0:03:01.956 ****** 2025-01-16 15:08:15.198204 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:08:15.198211 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:08:15.198217 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:08:15.198224 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:08:15.198230 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:08:15.198236 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:08:15.198243 | orchestrator | 2025-01-16 15:08:15.198249 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-01-16 15:08:15.198256 | orchestrator | Thursday 16 January 2025 15:01:46 
+0000 (0:00:03.358) 0:03:05.314 ****** 2025-01-16 15:08:15.198262 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:08:15.198268 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:08:15.198275 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:08:15.198281 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:08:15.198287 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:08:15.198303 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:08:15.198310 | orchestrator | 2025-01-16 15:08:15.198317 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] ********************************** 2025-01-16 15:08:15.198323 | orchestrator | Thursday 16 January 2025 15:01:47 +0000 (0:00:00.950) 0:03:06.265 ****** 2025-01-16 15:08:15.198330 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.198336 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.198342 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.198349 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:08:15.198355 | orchestrator | 2025-01-16 15:08:15.198365 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] ********************************** 2025-01-16 15:08:15.198372 | orchestrator | Thursday 16 January 2025 15:01:48 +0000 (0:00:00.789) 0:03:07.054 ****** 2025-01-16 15:08:15.198382 | orchestrator | 2025-01-16 15:08:15.198389 | orchestrator | TASK [ceph-handler : osds handler] ********************************************* 2025-01-16 15:08:15.198395 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.198401 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.198408 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.198414 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 15:08:15.198421 | orchestrator | 2025-01-16 15:08:15.198427 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact trigger_restart] ********************** 2025-01-16 15:08:15.198433 | orchestrator | Thursday 16 January 2025 15:01:49 +0000 (0:00:00.615) 0:03:07.670 ****** 2025-01-16 15:08:15.198439 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-01-16 15:08:15.198446 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-01-16 15:08:15.198452 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-01-16 15:08:15.198458 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.198465 | orchestrator | 2025-01-16 15:08:15.198471 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called before restart] ******** 2025-01-16 15:08:15.198477 | orchestrator | Thursday 16 January 2025 15:01:49 +0000 (0:00:00.465) 0:03:08.135 ****** 2025-01-16 15:08:15.198484 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.198490 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.198497 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.198503 | orchestrator | 2025-01-16 15:08:15.198509 | orchestrator | TASK [ceph-handler : set _osd_handler_called before restart] ******************* 2025-01-16 15:08:15.198515 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-01-16 15:08:15.198522 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-01-16 15:08:15.198528 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-01-16 15:08:15.198534 
| orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.198541 | orchestrator | 2025-01-16 15:08:15.198547 | orchestrator | RUNNING HANDLER [ceph-handler : unset noup flag] ******************************* 2025-01-16 15:08:15.198567 | orchestrator | Thursday 16 January 2025 15:01:50 +0000 (0:00:00.794) 0:03:08.930 ****** 2025-01-16 15:08:15.198576 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.198590 | orchestrator | 2025-01-16 15:08:15.198598 | orchestrator | RUNNING HANDLER [ceph-handler : copy osd restart script] *********************** 2025-01-16 15:08:15.198646 | orchestrator | Thursday 16 January 2025 15:01:50 +0000 (0:00:00.167) 0:03:09.098 ****** 2025-01-16 15:08:15.198654 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.198661 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.198667 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.198673 | orchestrator | 2025-01-16 15:08:15.198679 | orchestrator | RUNNING HANDLER [ceph-handler : copy osd restart script] *********************** 2025-01-16 15:08:15.198684 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.198690 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.198696 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.198702 | orchestrator | 2025-01-16 15:08:15.198708 | orchestrator | RUNNING HANDLER [ceph-handler : get pool list] ********************************* 2025-01-16 15:08:15.198714 | orchestrator | Thursday 16 January 2025 15:01:51 +0000 (0:00:00.543) 0:03:09.642 ****** 2025-01-16 15:08:15.198720 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.198726 | orchestrator | 2025-01-16 15:08:15.198732 | orchestrator | RUNNING HANDLER [ceph-handler : get balancer module status] ******************** 2025-01-16 15:08:15.198737 | orchestrator | Thursday 16 January 2025 15:01:51 +0000 (0:00:00.302) 0:03:09.944 ****** 2025-01-16 15:08:15.198743 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.198749 | orchestrator | 2025-01-16 15:08:15.198755 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] ************** 2025-01-16 15:08:15.198761 | orchestrator | Thursday 16 January 2025 15:01:51 +0000 (0:00:00.167) 0:03:10.112 ****** 2025-01-16 15:08:15.198767 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.198778 | orchestrator | 2025-01-16 15:08:15.198784 | orchestrator | RUNNING HANDLER [ceph-handler : disable balancer] ****************************** 2025-01-16 15:08:15.198790 | orchestrator | Thursday 16 January 2025 15:01:51 +0000 (0:00:00.080) 0:03:10.192 ****** 2025-01-16 15:08:15.198795 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.198801 | orchestrator | 2025-01-16 15:08:15.198807 | orchestrator | RUNNING HANDLER [ceph-handler : disable pg autoscale on pools] ***************** 2025-01-16 15:08:15.198813 | orchestrator | Thursday 16 January 2025 15:01:51 +0000 (0:00:00.159) 0:03:10.352 ****** 2025-01-16 15:08:15.198819 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.198825 | orchestrator | 2025-01-16 15:08:15.198831 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph osds daemon(s)] ******************* 2025-01-16 15:08:15.198836 | orchestrator | Thursday 16 January 2025 15:01:51 +0000 (0:00:00.164) 0:03:10.516 ****** 2025-01-16 15:08:15.198842 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-01-16 15:08:15.198848 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  
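Every OSD-related handler in this stretch is skipped: ceph.conf changed and the handlers were notified, but nothing set trigger_restart on this run (the set_fact trigger_restart task above skipped for every host), so the script-copy and restart tasks fall through. The usual pattern is a per-host flag checked by both the copy task and a delegated restart, roughly like the following sketch (assumed file and variable names, not the real ceph-handler tasks):

    # Sketch only: restart OSDs host by host, and only where the handler flag is set.
    - name: copy osd restart script
      ansible.builtin.template:
        src: restart_osd_daemon.sh.j2
        dest: "{{ tmpdirpath.path }}/restart_osd_daemon.sh"
        mode: "0750"
      when: trigger_restart | default(false) | bool

    - name: restart ceph osds daemon(s)
      ansible.builtin.command: "{{ tmpdirpath.path }}/restart_osd_daemon.sh"
      delegate_to: "{{ item }}"
      run_once: true
      with_items: "{{ groups['osds'] }}"
      when: hostvars[item]['trigger_restart'] | default(false) | bool

Because the flag stays unset on an initial deployment, both tasks appear as 'skipping' for every node in the log that follows.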
2025-01-16 15:08:15.198854 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-01-16 15:08:15.198860 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.198866 | orchestrator | 2025-01-16 15:08:15.198872 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called after restart] ********* 2025-01-16 15:08:15.198878 | orchestrator | Thursday 16 January 2025 15:01:52 +0000 (0:00:00.307) 0:03:10.824 ****** 2025-01-16 15:08:15.198883 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.198889 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.198895 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.198901 | orchestrator | 2025-01-16 15:08:15.198907 | orchestrator | TASK [ceph-handler : set _osd_handler_called after restart] ******************** 2025-01-16 15:08:15.198913 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.198919 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.198924 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.198930 | orchestrator | 2025-01-16 15:08:15.198936 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable pg autoscale on pools] *************** 2025-01-16 15:08:15.198942 | orchestrator | Thursday 16 January 2025 15:01:52 +0000 (0:00:00.433) 0:03:11.257 ****** 2025-01-16 15:08:15.198948 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.198953 | orchestrator | 2025-01-16 15:08:15.198959 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable balancer] **************************** 2025-01-16 15:08:15.198965 | orchestrator | Thursday 16 January 2025 15:01:53 +0000 (0:00:00.419) 0:03:11.676 ****** 2025-01-16 15:08:15.198971 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.198977 | orchestrator | 2025-01-16 15:08:15.198986 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-01-16 15:08:15.198992 | orchestrator | Thursday 16 January 2025 15:01:53 +0000 (0:00:00.164) 0:03:11.840 ****** 2025-01-16 15:08:15.198998 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 15:08:15.199004 | orchestrator | 2025-01-16 15:08:15.199009 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called before restart] ******** 2025-01-16 15:08:15.199015 | orchestrator | Thursday 16 January 2025 15:01:53 +0000 (0:00:00.529) 0:03:12.369 ****** 2025-01-16 15:08:15.199021 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.199027 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.199033 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.199038 | orchestrator | 2025-01-16 15:08:15.199044 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-01-16 15:08:15.199050 | orchestrator | Thursday 16 January 2025 15:01:54 +0000 (0:00:00.788) 0:03:13.158 ****** 2025-01-16 15:08:15.199056 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.199062 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.199068 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.199074 | orchestrator | 2025-01-16 15:08:15.199080 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-01-16 15:08:15.199089 | orchestrator | Thursday 16 January 2025 15:01:55 +0000 (0:00:00.544) 0:03:13.703 ****** 2025-01-16 15:08:15.199095 | orchestrator | skipping: [testbed-node-0] 2025-01-16 
15:08:15.199101 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.199107 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.199113 | orchestrator | 2025-01-16 15:08:15.199119 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] **************************** 2025-01-16 15:08:15.199124 | orchestrator | Thursday 16 January 2025 15:01:55 +0000 (0:00:00.479) 0:03:14.183 ****** 2025-01-16 15:08:15.199130 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.199136 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.199142 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.199148 | orchestrator | 2025-01-16 15:08:15.199154 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-01-16 15:08:15.199193 | orchestrator | Thursday 16 January 2025 15:01:56 +0000 (0:00:00.586) 0:03:14.769 ****** 2025-01-16 15:08:15.199201 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:08:15.199208 | orchestrator | 2025-01-16 15:08:15.199213 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ******** 2025-01-16 15:08:15.199219 | orchestrator | Thursday 16 January 2025 15:01:56 +0000 (0:00:00.758) 0:03:15.527 ****** 2025-01-16 15:08:15.199225 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.199231 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.199237 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.199243 | orchestrator | 2025-01-16 15:08:15.199249 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-01-16 15:08:15.199255 | orchestrator | Thursday 16 January 2025 15:01:57 +0000 (0:00:00.403) 0:03:15.931 ****** 2025-01-16 15:08:15.199261 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:08:15.199267 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:08:15.199273 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:08:15.199279 | orchestrator | 2025-01-16 15:08:15.199285 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ******************** 2025-01-16 15:08:15.199291 | orchestrator | Thursday 16 January 2025 15:01:58 +0000 (0:00:01.058) 0:03:16.990 ****** 2025-01-16 15:08:15.199297 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-01-16 15:08:15.199303 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-01-16 15:08:15.199309 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-01-16 15:08:15.199315 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.199320 | orchestrator | 2025-01-16 15:08:15.199327 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] ********* 2025-01-16 15:08:15.199332 | orchestrator | Thursday 16 January 2025 15:01:59 +0000 (0:00:00.644) 0:03:17.634 ****** 2025-01-16 15:08:15.199338 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.199344 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.199350 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.199356 | orchestrator | 2025-01-16 15:08:15.199362 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-01-16 15:08:15.199368 | orchestrator | Thursday 16 January 2025 15:01:59 +0000 (0:00:00.518) 0:03:18.152 ****** 2025-01-16 15:08:15.199374 | orchestrator | skipping: [testbed-node-0] 2025-01-16 
15:08:15.199380 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.199386 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.199392 | orchestrator | 2025-01-16 15:08:15.199398 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] *********************** 2025-01-16 15:08:15.199404 | orchestrator | Thursday 16 January 2025 15:01:59 +0000 (0:00:00.266) 0:03:18.418 ****** 2025-01-16 15:08:15.199409 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:08:15.199415 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:08:15.199421 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:08:15.199427 | orchestrator | 2025-01-16 15:08:15.199433 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mds daemon(s)] ******************** 2025-01-16 15:08:15.199454 | orchestrator | Thursday 16 January 2025 15:02:00 +0000 (0:00:00.882) 0:03:19.300 ****** 2025-01-16 15:08:15.199461 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-01-16 15:08:15.199467 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-01-16 15:08:15.199474 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-01-16 15:08:15.199480 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.199486 | orchestrator | 2025-01-16 15:08:15.199492 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] ********* 2025-01-16 15:08:15.199498 | orchestrator | Thursday 16 January 2025 15:02:01 +0000 (0:00:00.720) 0:03:20.021 ****** 2025-01-16 15:08:15.199505 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.199511 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.199523 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.199529 | orchestrator | 2025-01-16 15:08:15.199535 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-01-16 15:08:15.199542 | orchestrator | Thursday 16 January 2025 15:02:01 +0000 (0:00:00.293) 0:03:20.314 ****** 2025-01-16 15:08:15.199548 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 15:08:15.199591 | orchestrator | 2025-01-16 15:08:15.199598 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ******** 2025-01-16 15:08:15.199607 | orchestrator | Thursday 16 January 2025 15:02:02 +0000 (0:00:00.677) 0:03:20.992 ****** 2025-01-16 15:08:15.199613 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.199619 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.199625 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.199631 | orchestrator | 2025-01-16 15:08:15.199637 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] *********************** 2025-01-16 15:08:15.199643 | orchestrator | Thursday 16 January 2025 15:02:02 +0000 (0:00:00.311) 0:03:21.303 ****** 2025-01-16 15:08:15.199648 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:08:15.199654 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:08:15.199660 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:08:15.199666 | orchestrator | 2025-01-16 15:08:15.199672 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ******************** 2025-01-16 15:08:15.199678 | orchestrator | Thursday 16 January 2025 15:02:03 +0000 (0:00:00.985) 0:03:22.289 ****** 2025-01-16 15:08:15.199684 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-3)  2025-01-16 15:08:15.199690 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-01-16 15:08:15.199696 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-01-16 15:08:15.199702 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.199708 | orchestrator | 2025-01-16 15:08:15.199714 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] ********* 2025-01-16 15:08:15.199719 | orchestrator | Thursday 16 January 2025 15:02:04 +0000 (0:00:00.727) 0:03:23.017 ****** 2025-01-16 15:08:15.199725 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.199731 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.199737 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.199743 | orchestrator | 2025-01-16 15:08:15.199787 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] **************************** 2025-01-16 15:08:15.199795 | orchestrator | Thursday 16 January 2025 15:02:04 +0000 (0:00:00.260) 0:03:23.277 ****** 2025-01-16 15:08:15.199802 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.199807 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.199813 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.199819 | orchestrator | 2025-01-16 15:08:15.199825 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-01-16 15:08:15.199831 | orchestrator | Thursday 16 January 2025 15:02:05 +0000 (0:00:00.314) 0:03:23.592 ****** 2025-01-16 15:08:15.199837 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.199843 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.199853 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.199859 | orchestrator | 2025-01-16 15:08:15.199865 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ****** 2025-01-16 15:08:15.199871 | orchestrator | Thursday 16 January 2025 15:02:05 +0000 (0:00:00.258) 0:03:23.851 ****** 2025-01-16 15:08:15.199877 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.199883 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.199888 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.199894 | orchestrator | 2025-01-16 15:08:15.199900 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-01-16 15:08:15.199906 | orchestrator | Thursday 16 January 2025 15:02:05 +0000 (0:00:00.357) 0:03:24.209 ****** 2025-01-16 15:08:15.199912 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:08:15.199918 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:08:15.199923 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:08:15.199928 | orchestrator | 2025-01-16 15:08:15.199934 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-01-16 15:08:15.199939 | orchestrator | 2025-01-16 15:08:15.199944 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-01-16 15:08:15.199950 | orchestrator | Thursday 16 January 2025 15:02:07 +0000 (0:00:01.508) 0:03:25.717 ****** 2025-01-16 15:08:15.199956 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:08:15.199961 | orchestrator | 2025-01-16 15:08:15.199966 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 
2025-01-16 15:08:15.199971 | orchestrator | Thursday 16 January 2025 15:02:07 +0000 (0:00:00.499) 0:03:26.217 ****** 2025-01-16 15:08:15.199977 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.199982 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.199987 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.199993 | orchestrator | 2025-01-16 15:08:15.199998 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-01-16 15:08:15.200003 | orchestrator | Thursday 16 January 2025 15:02:08 +0000 (0:00:00.467) 0:03:26.685 ****** 2025-01-16 15:08:15.200011 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.200017 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.200022 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.200027 | orchestrator | 2025-01-16 15:08:15.200033 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-01-16 15:08:15.200038 | orchestrator | Thursday 16 January 2025 15:02:08 +0000 (0:00:00.236) 0:03:26.921 ****** 2025-01-16 15:08:15.200043 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.200049 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.200054 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.200059 | orchestrator | 2025-01-16 15:08:15.200064 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-01-16 15:08:15.200070 | orchestrator | Thursday 16 January 2025 15:02:08 +0000 (0:00:00.232) 0:03:27.154 ****** 2025-01-16 15:08:15.200075 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.200080 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.200086 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.200091 | orchestrator | 2025-01-16 15:08:15.200096 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-01-16 15:08:15.200101 | orchestrator | Thursday 16 January 2025 15:02:09 +0000 (0:00:00.408) 0:03:27.562 ****** 2025-01-16 15:08:15.200107 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.200112 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.200117 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.200122 | orchestrator | 2025-01-16 15:08:15.200128 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-01-16 15:08:15.200133 | orchestrator | Thursday 16 January 2025 15:02:09 +0000 (0:00:00.537) 0:03:28.099 ****** 2025-01-16 15:08:15.200138 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.200144 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.200152 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.200158 | orchestrator | 2025-01-16 15:08:15.200163 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-01-16 15:08:15.200168 | orchestrator | Thursday 16 January 2025 15:02:09 +0000 (0:00:00.261) 0:03:28.361 ****** 2025-01-16 15:08:15.200174 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.200179 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.200184 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.200190 | orchestrator | 2025-01-16 15:08:15.200198 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-01-16 15:08:15.200203 | orchestrator | Thursday 16 January 2025 15:02:10 +0000 (0:00:00.323) 
0:03:28.684 ****** 2025-01-16 15:08:15.200208 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.200214 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.200219 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.200224 | orchestrator | 2025-01-16 15:08:15.200229 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-01-16 15:08:15.200235 | orchestrator | Thursday 16 January 2025 15:02:10 +0000 (0:00:00.439) 0:03:29.124 ****** 2025-01-16 15:08:15.200240 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.200245 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.200251 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.200256 | orchestrator | 2025-01-16 15:08:15.200261 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-01-16 15:08:15.200267 | orchestrator | Thursday 16 January 2025 15:02:10 +0000 (0:00:00.291) 0:03:29.416 ****** 2025-01-16 15:08:15.200301 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.200309 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.200319 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.200326 | orchestrator | 2025-01-16 15:08:15.200332 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-01-16 15:08:15.200338 | orchestrator | Thursday 16 January 2025 15:02:11 +0000 (0:00:00.234) 0:03:29.651 ****** 2025-01-16 15:08:15.200343 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.200349 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.200355 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.200361 | orchestrator | 2025-01-16 15:08:15.200367 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-01-16 15:08:15.200373 | orchestrator | Thursday 16 January 2025 15:02:11 +0000 (0:00:00.524) 0:03:30.175 ****** 2025-01-16 15:08:15.200378 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.200384 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.200390 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.200396 | orchestrator | 2025-01-16 15:08:15.200401 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-01-16 15:08:15.200407 | orchestrator | Thursday 16 January 2025 15:02:11 +0000 (0:00:00.348) 0:03:30.524 ****** 2025-01-16 15:08:15.200413 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.200419 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.200424 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.200430 | orchestrator | 2025-01-16 15:08:15.200436 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-01-16 15:08:15.200441 | orchestrator | Thursday 16 January 2025 15:02:12 +0000 (0:00:00.285) 0:03:30.810 ****** 2025-01-16 15:08:15.200447 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.200453 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.200459 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.200464 | orchestrator | 2025-01-16 15:08:15.200470 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-01-16 15:08:15.200476 | orchestrator | Thursday 16 January 2025 15:02:12 +0000 (0:00:00.215) 0:03:31.025 ****** 2025-01-16 15:08:15.200482 | orchestrator | skipping: [testbed-node-0] 
2025-01-16 15:08:15.200487 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.200493 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.200502 | orchestrator | 2025-01-16 15:08:15.200508 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-01-16 15:08:15.200514 | orchestrator | Thursday 16 January 2025 15:02:12 +0000 (0:00:00.225) 0:03:31.251 ****** 2025-01-16 15:08:15.200520 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.200526 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.200531 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.200537 | orchestrator | 2025-01-16 15:08:15.200543 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-01-16 15:08:15.200548 | orchestrator | Thursday 16 January 2025 15:02:13 +0000 (0:00:00.375) 0:03:31.627 ****** 2025-01-16 15:08:15.200583 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.200593 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.200603 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.200611 | orchestrator | 2025-01-16 15:08:15.200616 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-01-16 15:08:15.200621 | orchestrator | Thursday 16 January 2025 15:02:13 +0000 (0:00:00.232) 0:03:31.859 ****** 2025-01-16 15:08:15.200626 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.200632 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.200637 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.200642 | orchestrator | 2025-01-16 15:08:15.200648 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-01-16 15:08:15.200653 | orchestrator | Thursday 16 January 2025 15:02:13 +0000 (0:00:00.222) 0:03:32.082 ****** 2025-01-16 15:08:15.200658 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.200663 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.200669 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.200674 | orchestrator | 2025-01-16 15:08:15.200680 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-01-16 15:08:15.200685 | orchestrator | Thursday 16 January 2025 15:02:13 +0000 (0:00:00.230) 0:03:32.312 ****** 2025-01-16 15:08:15.200690 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.200696 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.200701 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.200706 | orchestrator | 2025-01-16 15:08:15.200712 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-01-16 15:08:15.200717 | orchestrator | Thursday 16 January 2025 15:02:14 +0000 (0:00:00.367) 0:03:32.680 ****** 2025-01-16 15:08:15.200722 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.200728 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.200733 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.200738 | orchestrator | 2025-01-16 15:08:15.200744 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-01-16 15:08:15.200749 | orchestrator | Thursday 16 January 2025 15:02:14 +0000 (0:00:00.231) 0:03:32.911 ****** 2025-01-16 15:08:15.200754 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.200759 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.200765 
| orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.200770 | orchestrator | 2025-01-16 15:08:15.200775 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-01-16 15:08:15.200781 | orchestrator | Thursday 16 January 2025 15:02:14 +0000 (0:00:00.238) 0:03:33.150 ****** 2025-01-16 15:08:15.200786 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.200791 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.200796 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.200802 | orchestrator | 2025-01-16 15:08:15.200810 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-01-16 15:08:15.200816 | orchestrator | Thursday 16 January 2025 15:02:14 +0000 (0:00:00.223) 0:03:33.373 ****** 2025-01-16 15:08:15.200821 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.200826 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.200832 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.200837 | orchestrator | 2025-01-16 15:08:15.200842 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-01-16 15:08:15.200852 | orchestrator | Thursday 16 January 2025 15:02:15 +0000 (0:00:00.358) 0:03:33.731 ****** 2025-01-16 15:08:15.200857 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.200899 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.200906 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.200913 | orchestrator | 2025-01-16 15:08:15.200918 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-01-16 15:08:15.200924 | orchestrator | Thursday 16 January 2025 15:02:15 +0000 (0:00:00.252) 0:03:33.984 ****** 2025-01-16 15:08:15.200930 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.200936 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.200941 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.200947 | orchestrator | 2025-01-16 15:08:15.200953 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-01-16 15:08:15.200959 | orchestrator | Thursday 16 January 2025 15:02:15 +0000 (0:00:00.236) 0:03:34.221 ****** 2025-01-16 15:08:15.200965 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.200973 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.200979 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.200985 | orchestrator | 2025-01-16 15:08:15.200991 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-01-16 15:08:15.200997 | orchestrator | Thursday 16 January 2025 15:02:15 +0000 (0:00:00.254) 0:03:34.475 ****** 2025-01-16 15:08:15.201003 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.201008 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.201014 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.201020 | orchestrator | 2025-01-16 15:08:15.201026 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-01-16 15:08:15.201032 | orchestrator | Thursday 16 January 2025 15:02:16 +0000 (0:00:00.429) 0:03:34.905 ****** 2025-01-16 15:08:15.201038 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.201044 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.201049 | 
orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.201055 | orchestrator | 2025-01-16 15:08:15.201061 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-01-16 15:08:15.201067 | orchestrator | Thursday 16 January 2025 15:02:16 +0000 (0:00:00.274) 0:03:35.180 ****** 2025-01-16 15:08:15.201072 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.201078 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.201084 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.201090 | orchestrator | 2025-01-16 15:08:15.201096 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-01-16 15:08:15.201102 | orchestrator | Thursday 16 January 2025 15:02:16 +0000 (0:00:00.240) 0:03:35.420 ****** 2025-01-16 15:08:15.201107 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.201113 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.201119 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.201124 | orchestrator | 2025-01-16 15:08:15.201130 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-01-16 15:08:15.201136 | orchestrator | Thursday 16 January 2025 15:02:17 +0000 (0:00:00.254) 0:03:35.674 ****** 2025-01-16 15:08:15.201142 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.201147 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.201153 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.201159 | orchestrator | 2025-01-16 15:08:15.201164 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-01-16 15:08:15.201170 | orchestrator | Thursday 16 January 2025 15:02:17 +0000 (0:00:00.405) 0:03:36.080 ****** 2025-01-16 15:08:15.201176 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-01-16 15:08:15.201182 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-01-16 15:08:15.201188 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.201198 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-01-16 15:08:15.201204 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-01-16 15:08:15.201209 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.201215 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-01-16 15:08:15.201221 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-01-16 15:08:15.201226 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.201232 | orchestrator | 2025-01-16 15:08:15.201238 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-01-16 15:08:15.201244 | orchestrator | Thursday 16 January 2025 15:02:17 +0000 (0:00:00.281) 0:03:36.362 ****** 2025-01-16 15:08:15.201249 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-01-16 15:08:15.201255 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-01-16 15:08:15.201261 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.201267 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-01-16 15:08:15.201272 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-01-16 15:08:15.201278 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.201284 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-01-16 15:08:15.201290 | 
orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-01-16 15:08:15.201295 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.201301 | orchestrator | 2025-01-16 15:08:15.201307 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-01-16 15:08:15.201313 | orchestrator | Thursday 16 January 2025 15:02:18 +0000 (0:00:00.245) 0:03:36.607 ****** 2025-01-16 15:08:15.201318 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.201324 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.201330 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.201336 | orchestrator | 2025-01-16 15:08:15.201341 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-01-16 15:08:15.201347 | orchestrator | Thursday 16 January 2025 15:02:18 +0000 (0:00:00.224) 0:03:36.831 ****** 2025-01-16 15:08:15.201353 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.201359 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.201364 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.201370 | orchestrator | 2025-01-16 15:08:15.201376 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-01-16 15:08:15.201411 | orchestrator | Thursday 16 January 2025 15:02:18 +0000 (0:00:00.422) 0:03:37.254 ****** 2025-01-16 15:08:15.201419 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.201425 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.201432 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.201437 | orchestrator | 2025-01-16 15:08:15.201443 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-01-16 15:08:15.201449 | orchestrator | Thursday 16 January 2025 15:02:18 +0000 (0:00:00.244) 0:03:37.499 ****** 2025-01-16 15:08:15.201455 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.201461 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.201466 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.201472 | orchestrator | 2025-01-16 15:08:15.201478 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-01-16 15:08:15.201484 | orchestrator | Thursday 16 January 2025 15:02:19 +0000 (0:00:00.280) 0:03:37.779 ****** 2025-01-16 15:08:15.201490 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.201495 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.201501 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.201507 | orchestrator | 2025-01-16 15:08:15.201513 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-01-16 15:08:15.201519 | orchestrator | Thursday 16 January 2025 15:02:19 +0000 (0:00:00.371) 0:03:38.150 ****** 2025-01-16 15:08:15.201525 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.201534 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.201540 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.201546 | orchestrator | 2025-01-16 15:08:15.201566 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-01-16 15:08:15.201576 | orchestrator | Thursday 16 January 2025 15:02:20 +0000 (0:00:00.590) 0:03:38.741 ****** 2025-01-16 15:08:15.201582 | orchestrator | skipping: [testbed-node-0] => 
(item=testbed-node-3)  2025-01-16 15:08:15.201587 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-01-16 15:08:15.201592 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-01-16 15:08:15.201598 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.201603 | orchestrator | 2025-01-16 15:08:15.201609 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-01-16 15:08:15.201617 | orchestrator | Thursday 16 January 2025 15:02:20 +0000 (0:00:00.420) 0:03:39.162 ****** 2025-01-16 15:08:15.201622 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-01-16 15:08:15.201628 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-01-16 15:08:15.201633 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-01-16 15:08:15.201639 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.201644 | orchestrator | 2025-01-16 15:08:15.201649 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-01-16 15:08:15.201655 | orchestrator | Thursday 16 January 2025 15:02:20 +0000 (0:00:00.339) 0:03:39.501 ****** 2025-01-16 15:08:15.201669 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-01-16 15:08:15.201677 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-01-16 15:08:15.201683 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-01-16 15:08:15.201688 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.201694 | orchestrator | 2025-01-16 15:08:15.201699 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-01-16 15:08:15.201704 | orchestrator | Thursday 16 January 2025 15:02:21 +0000 (0:00:00.357) 0:03:39.859 ****** 2025-01-16 15:08:15.201710 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.201715 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.201720 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.201726 | orchestrator | 2025-01-16 15:08:15.201731 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-01-16 15:08:15.201736 | orchestrator | Thursday 16 January 2025 15:02:21 +0000 (0:00:00.308) 0:03:40.167 ****** 2025-01-16 15:08:15.201742 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-01-16 15:08:15.201747 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.201752 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-01-16 15:08:15.201757 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.201763 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-01-16 15:08:15.201768 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.201773 | orchestrator | 2025-01-16 15:08:15.201779 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-01-16 15:08:15.201784 | orchestrator | Thursday 16 January 2025 15:02:22 +0000 (0:00:00.665) 0:03:40.833 ****** 2025-01-16 15:08:15.201789 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.201801 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.201810 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.201817 | orchestrator | 2025-01-16 15:08:15.201824 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-01-16 15:08:15.201832 | orchestrator | Thursday 
16 January 2025 15:02:22 +0000 (0:00:00.262) 0:03:41.095 ****** 2025-01-16 15:08:15.201839 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.201847 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.201855 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.201865 | orchestrator | 2025-01-16 15:08:15.201873 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-01-16 15:08:15.201887 | orchestrator | Thursday 16 January 2025 15:02:22 +0000 (0:00:00.277) 0:03:41.372 ****** 2025-01-16 15:08:15.201895 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-01-16 15:08:15.201903 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.201911 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-01-16 15:08:15.201919 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.201927 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-01-16 15:08:15.201936 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.201944 | orchestrator | 2025-01-16 15:08:15.201952 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-01-16 15:08:15.201961 | orchestrator | Thursday 16 January 2025 15:02:23 +0000 (0:00:00.532) 0:03:41.905 ****** 2025-01-16 15:08:15.201970 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.201978 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.202044 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.202058 | orchestrator | 2025-01-16 15:08:15.202066 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-01-16 15:08:15.202075 | orchestrator | Thursday 16 January 2025 15:02:23 +0000 (0:00:00.482) 0:03:42.387 ****** 2025-01-16 15:08:15.202085 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-01-16 15:08:15.202095 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-01-16 15:08:15.202104 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-01-16 15:08:15.202113 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.202123 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-01-16 15:08:15.202132 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-01-16 15:08:15.202141 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-01-16 15:08:15.202149 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.202157 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-01-16 15:08:15.202166 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-01-16 15:08:15.202174 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-01-16 15:08:15.202183 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.202191 | orchestrator | 2025-01-16 15:08:15.202199 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-01-16 15:08:15.202208 | orchestrator | Thursday 16 January 2025 15:02:24 +0000 (0:00:00.567) 0:03:42.955 ****** 2025-01-16 15:08:15.202216 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.202225 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.202234 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.202243 | orchestrator | 2025-01-16 15:08:15.202252 | orchestrator | TASK [ceph-rgw : create rgw keyrings] 
****************************************** 2025-01-16 15:08:15.202260 | orchestrator | Thursday 16 January 2025 15:02:24 +0000 (0:00:00.570) 0:03:43.526 ****** 2025-01-16 15:08:15.202268 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.202276 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.202284 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.202293 | orchestrator | 2025-01-16 15:08:15.202302 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-01-16 15:08:15.202311 | orchestrator | Thursday 16 January 2025 15:02:25 +0000 (0:00:00.641) 0:03:44.167 ****** 2025-01-16 15:08:15.202319 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.202327 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.202336 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.202344 | orchestrator | 2025-01-16 15:08:15.202352 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-01-16 15:08:15.202360 | orchestrator | Thursday 16 January 2025 15:02:26 +0000 (0:00:00.700) 0:03:44.867 ****** 2025-01-16 15:08:15.202368 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.202377 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.202386 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.202404 | orchestrator | 2025-01-16 15:08:15.202413 | orchestrator | TASK [ceph-mon : set_fact container_exec_cmd] ********************************** 2025-01-16 15:08:15.202421 | orchestrator | Thursday 16 January 2025 15:02:26 +0000 (0:00:00.595) 0:03:45.463 ****** 2025-01-16 15:08:15.202430 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.202439 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.202447 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.202454 | orchestrator | 2025-01-16 15:08:15.202462 | orchestrator | TASK [ceph-mon : include deploy_monitors.yml] ********************************** 2025-01-16 15:08:15.202471 | orchestrator | Thursday 16 January 2025 15:02:27 +0000 (0:00:00.423) 0:03:45.886 ****** 2025-01-16 15:08:15.202480 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:08:15.202489 | orchestrator | 2025-01-16 15:08:15.202497 | orchestrator | TASK [ceph-mon : check if monitor initial keyring already exists] ************** 2025-01-16 15:08:15.202506 | orchestrator | Thursday 16 January 2025 15:02:27 +0000 (0:00:00.603) 0:03:46.490 ****** 2025-01-16 15:08:15.202515 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.202523 | orchestrator | 2025-01-16 15:08:15.202538 | orchestrator | TASK [ceph-mon : generate monitor initial keyring] ***************************** 2025-01-16 15:08:15.202546 | orchestrator | Thursday 16 January 2025 15:02:28 +0000 (0:00:00.119) 0:03:46.610 ****** 2025-01-16 15:08:15.202603 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-01-16 15:08:15.202614 | orchestrator | 2025-01-16 15:08:15.202623 | orchestrator | TASK [ceph-mon : set_fact _initial_mon_key_success] **************************** 2025-01-16 15:08:15.202631 | orchestrator | Thursday 16 January 2025 15:02:28 +0000 (0:00:00.514) 0:03:47.124 ****** 2025-01-16 15:08:15.202640 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.202648 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.202657 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.202665 | orchestrator | 2025-01-16 
15:08:15.202673 | orchestrator | TASK [ceph-mon : get initial keyring when it already exists] ******************* 2025-01-16 15:08:15.202681 | orchestrator | Thursday 16 January 2025 15:02:28 +0000 (0:00:00.331) 0:03:47.455 ****** 2025-01-16 15:08:15.202689 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.202697 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.202705 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.202713 | orchestrator | 2025-01-16 15:08:15.202723 | orchestrator | TASK [ceph-mon : create monitor initial keyring] ******************************* 2025-01-16 15:08:15.202732 | orchestrator | Thursday 16 January 2025 15:02:29 +0000 (0:00:00.458) 0:03:47.914 ****** 2025-01-16 15:08:15.202740 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:08:15.202748 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:08:15.202757 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:08:15.202766 | orchestrator | 2025-01-16 15:08:15.202774 | orchestrator | TASK [ceph-mon : copy the initial key in /etc/ceph (for containers)] *********** 2025-01-16 15:08:15.202782 | orchestrator | Thursday 16 January 2025 15:02:30 +0000 (0:00:00.894) 0:03:48.808 ****** 2025-01-16 15:08:15.202790 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:08:15.202799 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:08:15.202807 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:08:15.202815 | orchestrator | 2025-01-16 15:08:15.202879 | orchestrator | TASK [ceph-mon : create monitor directory] ************************************* 2025-01-16 15:08:15.202892 | orchestrator | Thursday 16 January 2025 15:02:30 +0000 (0:00:00.630) 0:03:49.439 ****** 2025-01-16 15:08:15.202902 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:08:15.202911 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:08:15.202920 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:08:15.202929 | orchestrator | 2025-01-16 15:08:15.202937 | orchestrator | TASK [ceph-mon : recursively fix ownership of monitor directory] *************** 2025-01-16 15:08:15.202945 | orchestrator | Thursday 16 January 2025 15:02:31 +0000 (0:00:00.567) 0:03:50.007 ****** 2025-01-16 15:08:15.202954 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.202963 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.202981 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.202991 | orchestrator | 2025-01-16 15:08:15.203000 | orchestrator | TASK [ceph-mon : create custom admin keyring] ********************************** 2025-01-16 15:08:15.203009 | orchestrator | Thursday 16 January 2025 15:02:32 +0000 (0:00:00.735) 0:03:50.742 ****** 2025-01-16 15:08:15.203017 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.203025 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.203034 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.203049 | orchestrator | 2025-01-16 15:08:15.203056 | orchestrator | TASK [ceph-mon : set_fact ceph-authtool container command] ********************* 2025-01-16 15:08:15.203065 | orchestrator | Thursday 16 January 2025 15:02:32 +0000 (0:00:00.327) 0:03:51.070 ****** 2025-01-16 15:08:15.203074 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.203083 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.203091 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.203100 | orchestrator | 2025-01-16 15:08:15.203108 | orchestrator | TASK [ceph-mon : import admin keyring into mon keyring] ************************ 2025-01-16 
15:08:15.203116 | orchestrator | Thursday 16 January 2025 15:02:32 +0000 (0:00:00.298) 0:03:51.368 ****** 2025-01-16 15:08:15.203124 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.203133 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.203142 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.203149 | orchestrator | 2025-01-16 15:08:15.203157 | orchestrator | TASK [ceph-mon : set_fact ceph-mon container command] ************************** 2025-01-16 15:08:15.203164 | orchestrator | Thursday 16 January 2025 15:02:33 +0000 (0:00:00.379) 0:03:51.748 ****** 2025-01-16 15:08:15.203172 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.203180 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.203188 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.203195 | orchestrator | 2025-01-16 15:08:15.203203 | orchestrator | TASK [ceph-mon : ceph monitor mkfs with keyring] ******************************* 2025-01-16 15:08:15.203211 | orchestrator | Thursday 16 January 2025 15:02:33 +0000 (0:00:00.511) 0:03:52.260 ****** 2025-01-16 15:08:15.203219 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:08:15.203226 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:08:15.203233 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:08:15.203241 | orchestrator | 2025-01-16 15:08:15.203248 | orchestrator | TASK [ceph-mon : ceph monitor mkfs without keyring] **************************** 2025-01-16 15:08:15.203256 | orchestrator | Thursday 16 January 2025 15:02:34 +0000 (0:00:00.823) 0:03:53.083 ****** 2025-01-16 15:08:15.203264 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.203272 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.203279 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.203287 | orchestrator | 2025-01-16 15:08:15.203295 | orchestrator | TASK [ceph-mon : include start_monitor.yml] ************************************ 2025-01-16 15:08:15.203303 | orchestrator | Thursday 16 January 2025 15:02:34 +0000 (0:00:00.286) 0:03:53.370 ****** 2025-01-16 15:08:15.203312 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:08:15.203321 | orchestrator | 2025-01-16 15:08:15.203329 | orchestrator | TASK [ceph-mon : ensure systemd service override directory exists] ************* 2025-01-16 15:08:15.203338 | orchestrator | Thursday 16 January 2025 15:02:35 +0000 (0:00:00.579) 0:03:53.949 ****** 2025-01-16 15:08:15.203346 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.203354 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.203363 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.203371 | orchestrator | 2025-01-16 15:08:15.203379 | orchestrator | TASK [ceph-mon : add ceph-mon systemd service overrides] *********************** 2025-01-16 15:08:15.203387 | orchestrator | Thursday 16 January 2025 15:02:35 +0000 (0:00:00.252) 0:03:54.202 ****** 2025-01-16 15:08:15.203396 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.203404 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.203413 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.203421 | orchestrator | 2025-01-16 15:08:15.203434 | orchestrator | TASK [ceph-mon : include_tasks systemd.yml] ************************************ 2025-01-16 15:08:15.203450 | orchestrator | Thursday 16 January 2025 15:02:35 +0000 (0:00:00.263) 0:03:54.466 ****** 2025-01-16 15:08:15.203459 | 
orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:08:15.203468 | orchestrator | 2025-01-16 15:08:15.203477 | orchestrator | TASK [ceph-mon : generate systemd unit file for mon container] ***************** 2025-01-16 15:08:15.203485 | orchestrator | Thursday 16 January 2025 15:02:36 +0000 (0:00:00.596) 0:03:55.062 ****** 2025-01-16 15:08:15.203494 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:08:15.203502 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:08:15.203511 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:08:15.203520 | orchestrator | 2025-01-16 15:08:15.203528 | orchestrator | TASK [ceph-mon : generate systemd ceph-mon target file] ************************ 2025-01-16 15:08:15.203537 | orchestrator | Thursday 16 January 2025 15:02:37 +0000 (0:00:00.944) 0:03:56.007 ****** 2025-01-16 15:08:15.203545 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:08:15.203575 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:08:15.203586 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:08:15.203594 | orchestrator | 2025-01-16 15:08:15.203603 | orchestrator | TASK [ceph-mon : enable ceph-mon.target] *************************************** 2025-01-16 15:08:15.203611 | orchestrator | Thursday 16 January 2025 15:02:38 +0000 (0:00:00.810) 0:03:56.817 ****** 2025-01-16 15:08:15.203619 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:08:15.203628 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:08:15.203637 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:08:15.203645 | orchestrator | 2025-01-16 15:08:15.203691 | orchestrator | TASK [ceph-mon : start the monitor service] ************************************ 2025-01-16 15:08:15.203705 | orchestrator | Thursday 16 January 2025 15:02:39 +0000 (0:00:01.415) 0:03:58.233 ****** 2025-01-16 15:08:15.203713 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:08:15.203722 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:08:15.203731 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:08:15.203739 | orchestrator | 2025-01-16 15:08:15.203748 | orchestrator | TASK [ceph-mon : include_tasks ceph_keys.yml] ********************************** 2025-01-16 15:08:15.203756 | orchestrator | Thursday 16 January 2025 15:02:41 +0000 (0:00:01.352) 0:03:59.585 ****** 2025-01-16 15:08:15.203765 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:08:15.203774 | orchestrator | 2025-01-16 15:08:15.203783 | orchestrator | TASK [ceph-mon : waiting for the monitor(s) to form the quorum...] ************* 2025-01-16 15:08:15.203791 | orchestrator | Thursday 16 January 2025 15:02:41 +0000 (0:00:00.646) 0:04:00.231 ****** 2025-01-16 15:08:15.203800 | orchestrator | FAILED - RETRYING: [testbed-node-0]: waiting for the monitor(s) to form the quorum... (10 retries left). 
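
Note on the quorum wait above: after the monitor containers are started, the playbook polls until every expected monitor appears in the quorum, so a single "FAILED - RETRYING" line followed by "ok" (here roughly 21 seconds later, as the next task's timing shows) only means the mons were still electing a leader, not that the deployment failed. A minimal sketch of an equivalent manual check, under the assumption that the monitors run in containers named ceph-mon-<short hostname> (the usual ceph-ansible naming) and that docker and jq are available on the node:

    # illustrative only -- not the ceph-ansible task itself
    expected=3   # testbed-node-0..2 are the monitor hosts in this run
    for attempt in $(seq 1 10); do
        in_quorum=$(docker exec "ceph-mon-$(hostname -s)" \
            ceph quorum_status --format json | jq '.quorum_names | length')
        if [ "${in_quorum:-0}" -ge "$expected" ]; then
            echo "quorum formed with ${in_quorum} monitor(s)"
            break
        fi
        echo "attempt ${attempt}: ${in_quorum:-0} monitor(s) in quorum, retrying..."
        sleep 10
    done

Once all three monitors show up in the quorum, the task reports ok and the play moves on to fetching the initial keys, as the following entries show.
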
2025-01-16 15:08:15.203808 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.203817 | orchestrator | 2025-01-16 15:08:15.203825 | orchestrator | TASK [ceph-mon : fetch ceph initial keys] ************************************** 2025-01-16 15:08:15.203834 | orchestrator | Thursday 16 January 2025 15:03:02 +0000 (0:00:20.958) 0:04:21.190 ****** 2025-01-16 15:08:15.203842 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.203851 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.203859 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.203868 | orchestrator | 2025-01-16 15:08:15.203876 | orchestrator | TASK [ceph-mon : include secure_cluster.yml] *********************************** 2025-01-16 15:08:15.203885 | orchestrator | Thursday 16 January 2025 15:03:06 +0000 (0:00:04.319) 0:04:25.510 ****** 2025-01-16 15:08:15.203894 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.203902 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.203911 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.203919 | orchestrator | 2025-01-16 15:08:15.203927 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-01-16 15:08:15.203935 | orchestrator | Thursday 16 January 2025 15:03:07 +0000 (0:00:00.985) 0:04:26.495 ****** 2025-01-16 15:08:15.203950 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:08:15.203958 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:08:15.203966 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:08:15.203974 | orchestrator | 2025-01-16 15:08:15.203981 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] ********************************** 2025-01-16 15:08:15.203988 | orchestrator | Thursday 16 January 2025 15:03:08 +0000 (0:00:00.528) 0:04:27.024 ****** 2025-01-16 15:08:15.203996 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:08:15.204004 | orchestrator | 2025-01-16 15:08:15.204012 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ******** 2025-01-16 15:08:15.204019 | orchestrator | Thursday 16 January 2025 15:03:09 +0000 (0:00:00.640) 0:04:27.664 ****** 2025-01-16 15:08:15.204027 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.204035 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.204042 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.204050 | orchestrator | 2025-01-16 15:08:15.204058 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-01-16 15:08:15.204066 | orchestrator | Thursday 16 January 2025 15:03:09 +0000 (0:00:00.266) 0:04:27.931 ****** 2025-01-16 15:08:15.204074 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:08:15.204082 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:08:15.204091 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:08:15.204100 | orchestrator | 2025-01-16 15:08:15.204108 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ******************** 2025-01-16 15:08:15.204117 | orchestrator | Thursday 16 January 2025 15:03:10 +0000 (0:00:00.909) 0:04:28.840 ****** 2025-01-16 15:08:15.204125 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-01-16 15:08:15.204134 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-01-16 15:08:15.204143 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-01-16 
15:08:15.204151 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.204160 | orchestrator | 2025-01-16 15:08:15.204168 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] ********* 2025-01-16 15:08:15.204176 | orchestrator | Thursday 16 January 2025 15:03:11 +0000 (0:00:00.709) 0:04:29.550 ****** 2025-01-16 15:08:15.204184 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.204192 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.204207 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.204215 | orchestrator | 2025-01-16 15:08:15.204224 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-01-16 15:08:15.204232 | orchestrator | Thursday 16 January 2025 15:03:11 +0000 (0:00:00.413) 0:04:29.963 ****** 2025-01-16 15:08:15.204241 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:08:15.204249 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:08:15.204257 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:08:15.204266 | orchestrator | 2025-01-16 15:08:15.204277 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-01-16 15:08:15.204286 | orchestrator | 2025-01-16 15:08:15.204295 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-01-16 15:08:15.204303 | orchestrator | Thursday 16 January 2025 15:03:12 +0000 (0:00:01.491) 0:04:31.455 ****** 2025-01-16 15:08:15.204312 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:08:15.204321 | orchestrator | 2025-01-16 15:08:15.204329 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-01-16 15:08:15.204337 | orchestrator | Thursday 16 January 2025 15:03:13 +0000 (0:00:00.483) 0:04:31.938 ****** 2025-01-16 15:08:15.204346 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.204354 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.204363 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.204371 | orchestrator | 2025-01-16 15:08:15.204379 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-01-16 15:08:15.204427 | orchestrator | Thursday 16 January 2025 15:03:13 +0000 (0:00:00.475) 0:04:32.414 ****** 2025-01-16 15:08:15.204439 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.204447 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.204455 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.204463 | orchestrator | 2025-01-16 15:08:15.204471 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-01-16 15:08:15.204479 | orchestrator | Thursday 16 January 2025 15:03:14 +0000 (0:00:00.259) 0:04:32.674 ****** 2025-01-16 15:08:15.204486 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.204494 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.204502 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.204510 | orchestrator | 2025-01-16 15:08:15.204517 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-01-16 15:08:15.204525 | orchestrator | Thursday 16 January 2025 15:03:14 +0000 (0:00:00.215) 0:04:32.890 ****** 2025-01-16 15:08:15.204533 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.204540 | orchestrator | skipping: 
[testbed-node-1] 2025-01-16 15:08:15.204548 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.204577 | orchestrator | 2025-01-16 15:08:15.204585 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-01-16 15:08:15.204592 | orchestrator | Thursday 16 January 2025 15:03:14 +0000 (0:00:00.388) 0:04:33.278 ****** 2025-01-16 15:08:15.204600 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.204607 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.204615 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.204623 | orchestrator | 2025-01-16 15:08:15.204631 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-01-16 15:08:15.204639 | orchestrator | Thursday 16 January 2025 15:03:15 +0000 (0:00:00.551) 0:04:33.830 ****** 2025-01-16 15:08:15.204647 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.204654 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.204662 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.204670 | orchestrator | 2025-01-16 15:08:15.204678 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-01-16 15:08:15.204686 | orchestrator | Thursday 16 January 2025 15:03:15 +0000 (0:00:00.226) 0:04:34.057 ****** 2025-01-16 15:08:15.204693 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.204701 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.204708 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.204716 | orchestrator | 2025-01-16 15:08:15.204724 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-01-16 15:08:15.204732 | orchestrator | Thursday 16 January 2025 15:03:15 +0000 (0:00:00.221) 0:04:34.278 ****** 2025-01-16 15:08:15.204739 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.204747 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.204754 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.204762 | orchestrator | 2025-01-16 15:08:15.204770 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-01-16 15:08:15.204777 | orchestrator | Thursday 16 January 2025 15:03:16 +0000 (0:00:00.418) 0:04:34.697 ****** 2025-01-16 15:08:15.204784 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.204791 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.204799 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.204806 | orchestrator | 2025-01-16 15:08:15.204814 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-01-16 15:08:15.204823 | orchestrator | Thursday 16 January 2025 15:03:16 +0000 (0:00:00.291) 0:04:34.989 ****** 2025-01-16 15:08:15.204830 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.204838 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.204845 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.204853 | orchestrator | 2025-01-16 15:08:15.204860 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-01-16 15:08:15.204868 | orchestrator | Thursday 16 January 2025 15:03:16 +0000 (0:00:00.231) 0:04:35.221 ****** 2025-01-16 15:08:15.204883 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.204891 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.204898 | orchestrator | ok: [testbed-node-2] 
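[Editor's note: the repeated "check for a ... container" tasks above (mon, osd, mds, rgw, mgr, rbd mirror, nfs, tcmu-runner, rbd-target-api, rbd-target-gw, ceph-crash) only record whether a matching container exists on each node, so that the later handler_*_status facts and restart handlers can key off the result. Below is a minimal Ansible sketch of that pattern, assuming a podman runtime and an illustrative container name and register variable; it is not the actual ceph-handler task source.]

- name: Check for a ceph-mon container (illustrative sketch, not ceph-handler source)
  ansible.builtin.command: >-
    podman ps -q --filter "name=ceph-mon-{{ ansible_facts['hostname'] }}"
  register: ceph_mon_container_stat   # assumed variable name for the query result
  changed_when: false                 # a read-only query should never report "changed"
  failed_when: false                  # a missing container is expected on non-mon nodes, not an error
  check_mode: false

[A later set_fact task can then derive a boolean such as handler_mon_status from whether ceph_mon_container_stat.stdout is non-empty, which is the shape of the "set_fact handler_*_status" results that follow in this log.]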
2025-01-16 15:08:15.204906 | orchestrator | 2025-01-16 15:08:15.204914 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-01-16 15:08:15.204921 | orchestrator | Thursday 16 January 2025 15:03:17 +0000 (0:00:00.569) 0:04:35.790 ****** 2025-01-16 15:08:15.204929 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.204937 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.204946 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.204954 | orchestrator | 2025-01-16 15:08:15.204963 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-01-16 15:08:15.204972 | orchestrator | Thursday 16 January 2025 15:03:17 +0000 (0:00:00.368) 0:04:36.159 ****** 2025-01-16 15:08:15.204981 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.204990 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.204998 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.205006 | orchestrator | 2025-01-16 15:08:15.205015 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-01-16 15:08:15.205023 | orchestrator | Thursday 16 January 2025 15:03:17 +0000 (0:00:00.235) 0:04:36.394 ****** 2025-01-16 15:08:15.205031 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.205039 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.205046 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.205054 | orchestrator | 2025-01-16 15:08:15.205061 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-01-16 15:08:15.205075 | orchestrator | Thursday 16 January 2025 15:03:18 +0000 (0:00:00.209) 0:04:36.603 ****** 2025-01-16 15:08:15.205083 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.205091 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.205098 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.205106 | orchestrator | 2025-01-16 15:08:15.205113 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-01-16 15:08:15.205121 | orchestrator | Thursday 16 January 2025 15:03:18 +0000 (0:00:00.233) 0:04:36.837 ****** 2025-01-16 15:08:15.205129 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.205139 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.205147 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.205154 | orchestrator | 2025-01-16 15:08:15.205162 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-01-16 15:08:15.205213 | orchestrator | Thursday 16 January 2025 15:03:18 +0000 (0:00:00.442) 0:04:37.280 ****** 2025-01-16 15:08:15.205223 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.205232 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.205239 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.205247 | orchestrator | 2025-01-16 15:08:15.205254 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-01-16 15:08:15.205262 | orchestrator | Thursday 16 January 2025 15:03:18 +0000 (0:00:00.256) 0:04:37.536 ****** 2025-01-16 15:08:15.205269 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.205276 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.205283 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.205291 | orchestrator | 2025-01-16 15:08:15.205299 | 
orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-01-16 15:08:15.205307 | orchestrator | Thursday 16 January 2025 15:03:19 +0000 (0:00:00.296) 0:04:37.833 ****** 2025-01-16 15:08:15.205324 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.205332 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.205340 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.205348 | orchestrator | 2025-01-16 15:08:15.205356 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-01-16 15:08:15.205364 | orchestrator | Thursday 16 January 2025 15:03:19 +0000 (0:00:00.246) 0:04:38.080 ****** 2025-01-16 15:08:15.205372 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.205386 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.205395 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.205402 | orchestrator | 2025-01-16 15:08:15.205410 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-01-16 15:08:15.205419 | orchestrator | Thursday 16 January 2025 15:03:19 +0000 (0:00:00.381) 0:04:38.462 ****** 2025-01-16 15:08:15.205426 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.205434 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.205442 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.205450 | orchestrator | 2025-01-16 15:08:15.205458 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-01-16 15:08:15.205465 | orchestrator | Thursday 16 January 2025 15:03:20 +0000 (0:00:00.228) 0:04:38.690 ****** 2025-01-16 15:08:15.205473 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.205480 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.205488 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.205497 | orchestrator | 2025-01-16 15:08:15.205504 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-01-16 15:08:15.205512 | orchestrator | Thursday 16 January 2025 15:03:20 +0000 (0:00:00.230) 0:04:38.920 ****** 2025-01-16 15:08:15.205520 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.205527 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.205535 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.205542 | orchestrator | 2025-01-16 15:08:15.205550 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-01-16 15:08:15.205608 | orchestrator | Thursday 16 January 2025 15:03:20 +0000 (0:00:00.227) 0:04:39.148 ****** 2025-01-16 15:08:15.205616 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.205623 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.205631 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.205639 | orchestrator | 2025-01-16 15:08:15.205646 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-01-16 15:08:15.205654 | orchestrator | Thursday 16 January 2025 15:03:20 +0000 (0:00:00.376) 0:04:39.525 ****** 2025-01-16 15:08:15.205662 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.205670 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.205678 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.205686 | orchestrator | 2025-01-16 15:08:15.205694 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 
2025-01-16 15:08:15.205701 | orchestrator | Thursday 16 January 2025 15:03:21 +0000 (0:00:00.228) 0:04:39.753 ****** 2025-01-16 15:08:15.205709 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.205717 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.205725 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.205733 | orchestrator | 2025-01-16 15:08:15.205741 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-01-16 15:08:15.205749 | orchestrator | Thursday 16 January 2025 15:03:21 +0000 (0:00:00.211) 0:04:39.965 ****** 2025-01-16 15:08:15.205757 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.205764 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.205772 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.205779 | orchestrator | 2025-01-16 15:08:15.205786 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-01-16 15:08:15.205794 | orchestrator | Thursday 16 January 2025 15:03:21 +0000 (0:00:00.219) 0:04:40.184 ****** 2025-01-16 15:08:15.205802 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.205810 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.205817 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.205824 | orchestrator | 2025-01-16 15:08:15.205832 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-01-16 15:08:15.205839 | orchestrator | Thursday 16 January 2025 15:03:21 +0000 (0:00:00.348) 0:04:40.532 ****** 2025-01-16 15:08:15.205847 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.205864 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.205873 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.205880 | orchestrator | 2025-01-16 15:08:15.205888 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-01-16 15:08:15.205896 | orchestrator | Thursday 16 January 2025 15:03:22 +0000 (0:00:00.333) 0:04:40.866 ****** 2025-01-16 15:08:15.205905 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.205913 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.205921 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.205929 | orchestrator | 2025-01-16 15:08:15.205936 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-01-16 15:08:15.205944 | orchestrator | Thursday 16 January 2025 15:03:22 +0000 (0:00:00.320) 0:04:41.186 ****** 2025-01-16 15:08:15.205952 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.205959 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.206007 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.206032 | orchestrator | 2025-01-16 15:08:15.206039 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-01-16 15:08:15.206044 | orchestrator | Thursday 16 January 2025 15:03:22 +0000 (0:00:00.269) 0:04:41.456 ****** 2025-01-16 15:08:15.206050 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.206058 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.206065 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.206073 | orchestrator | 2025-01-16 15:08:15.206086 | orchestrator | TASK [ceph-config : set_fact 
_osd_memory_target, override from ceph_conf_overrides] *** 2025-01-16 15:08:15.206095 | orchestrator | Thursday 16 January 2025 15:03:23 +0000 (0:00:00.454) 0:04:41.910 ****** 2025-01-16 15:08:15.206103 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-01-16 15:08:15.206111 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-01-16 15:08:15.206118 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.206130 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-01-16 15:08:15.206138 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-01-16 15:08:15.206146 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.206155 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-01-16 15:08:15.206163 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-01-16 15:08:15.206172 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.206180 | orchestrator | 2025-01-16 15:08:15.206189 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-01-16 15:08:15.206197 | orchestrator | Thursday 16 January 2025 15:03:23 +0000 (0:00:00.332) 0:04:42.242 ****** 2025-01-16 15:08:15.206205 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-01-16 15:08:15.206213 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-01-16 15:08:15.206221 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.206230 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-01-16 15:08:15.206239 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-01-16 15:08:15.206247 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.206255 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-01-16 15:08:15.206264 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-01-16 15:08:15.206272 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.206281 | orchestrator | 2025-01-16 15:08:15.206289 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-01-16 15:08:15.206298 | orchestrator | Thursday 16 January 2025 15:03:24 +0000 (0:00:00.310) 0:04:42.553 ****** 2025-01-16 15:08:15.206306 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.206314 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.206323 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.206331 | orchestrator | 2025-01-16 15:08:15.206340 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-01-16 15:08:15.206356 | orchestrator | Thursday 16 January 2025 15:03:24 +0000 (0:00:00.324) 0:04:42.878 ****** 2025-01-16 15:08:15.206365 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.206373 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.206381 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.206390 | orchestrator | 2025-01-16 15:08:15.206398 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-01-16 15:08:15.206407 | orchestrator | Thursday 16 January 2025 15:03:24 +0000 (0:00:00.541) 0:04:43.419 ****** 2025-01-16 15:08:15.206415 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.206423 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.206432 | orchestrator | skipping: [testbed-node-2] 2025-01-16 
15:08:15.206441 | orchestrator | 2025-01-16 15:08:15.206450 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-01-16 15:08:15.206459 | orchestrator | Thursday 16 January 2025 15:03:25 +0000 (0:00:00.282) 0:04:43.702 ****** 2025-01-16 15:08:15.206468 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.206477 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.206486 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.206495 | orchestrator | 2025-01-16 15:08:15.206504 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-01-16 15:08:15.206513 | orchestrator | Thursday 16 January 2025 15:03:25 +0000 (0:00:00.237) 0:04:43.939 ****** 2025-01-16 15:08:15.206522 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.206531 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.206539 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.206547 | orchestrator | 2025-01-16 15:08:15.206573 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-01-16 15:08:15.206581 | orchestrator | Thursday 16 January 2025 15:03:25 +0000 (0:00:00.251) 0:04:44.191 ****** 2025-01-16 15:08:15.206589 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.206598 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.206606 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.206615 | orchestrator | 2025-01-16 15:08:15.206623 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-01-16 15:08:15.206632 | orchestrator | Thursday 16 January 2025 15:03:26 +0000 (0:00:00.465) 0:04:44.656 ****** 2025-01-16 15:08:15.206640 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-01-16 15:08:15.206649 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-01-16 15:08:15.206657 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-01-16 15:08:15.206666 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.206676 | orchestrator | 2025-01-16 15:08:15.206684 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-01-16 15:08:15.206693 | orchestrator | Thursday 16 January 2025 15:03:26 +0000 (0:00:00.319) 0:04:44.976 ****** 2025-01-16 15:08:15.206702 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-01-16 15:08:15.206711 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-01-16 15:08:15.206720 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-01-16 15:08:15.206728 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.206737 | orchestrator | 2025-01-16 15:08:15.206775 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-01-16 15:08:15.206784 | orchestrator | Thursday 16 January 2025 15:03:26 +0000 (0:00:00.314) 0:04:45.291 ****** 2025-01-16 15:08:15.206792 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-01-16 15:08:15.206800 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-01-16 15:08:15.206807 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-01-16 15:08:15.206815 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.206823 | orchestrator | 2025-01-16 15:08:15.206830 | orchestrator | TASK [ceph-facts : reset 
rgw_instances (workaround)] *************************** 2025-01-16 15:08:15.206844 | orchestrator | Thursday 16 January 2025 15:03:27 +0000 (0:00:00.310) 0:04:45.601 ****** 2025-01-16 15:08:15.206852 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.206860 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.206868 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.206876 | orchestrator | 2025-01-16 15:08:15.206884 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-01-16 15:08:15.206891 | orchestrator | Thursday 16 January 2025 15:03:27 +0000 (0:00:00.226) 0:04:45.828 ****** 2025-01-16 15:08:15.206899 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-01-16 15:08:15.206906 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.206915 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-01-16 15:08:15.206923 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.206931 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-01-16 15:08:15.206939 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.206948 | orchestrator | 2025-01-16 15:08:15.206956 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-01-16 15:08:15.206965 | orchestrator | Thursday 16 January 2025 15:03:27 +0000 (0:00:00.376) 0:04:46.205 ****** 2025-01-16 15:08:15.206973 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.206982 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.206990 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.206998 | orchestrator | 2025-01-16 15:08:15.207007 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-01-16 15:08:15.207015 | orchestrator | Thursday 16 January 2025 15:03:28 +0000 (0:00:00.372) 0:04:46.577 ****** 2025-01-16 15:08:15.207024 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.207032 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.207041 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.207049 | orchestrator | 2025-01-16 15:08:15.207058 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-01-16 15:08:15.207067 | orchestrator | Thursday 16 January 2025 15:03:28 +0000 (0:00:00.235) 0:04:46.813 ****** 2025-01-16 15:08:15.207075 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-01-16 15:08:15.207083 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.207092 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-01-16 15:08:15.207100 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.207109 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-01-16 15:08:15.207118 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.207126 | orchestrator | 2025-01-16 15:08:15.207135 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-01-16 15:08:15.207147 | orchestrator | Thursday 16 January 2025 15:03:28 +0000 (0:00:00.339) 0:04:47.152 ****** 2025-01-16 15:08:15.207156 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.207165 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.207173 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.207182 | orchestrator | 2025-01-16 15:08:15.207191 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 
2025-01-16 15:08:15.207199 | orchestrator | Thursday 16 January 2025 15:03:28 +0000 (0:00:00.270) 0:04:47.423 ****** 2025-01-16 15:08:15.207208 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-01-16 15:08:15.207217 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-01-16 15:08:15.207226 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-01-16 15:08:15.207234 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.207243 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-01-16 15:08:15.207252 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-01-16 15:08:15.207260 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-01-16 15:08:15.207269 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.207278 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-01-16 15:08:15.207296 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-01-16 15:08:15.207304 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-01-16 15:08:15.207311 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.207319 | orchestrator | 2025-01-16 15:08:15.207327 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-01-16 15:08:15.207334 | orchestrator | Thursday 16 January 2025 15:03:29 +0000 (0:00:00.613) 0:04:48.036 ****** 2025-01-16 15:08:15.207341 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.207349 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.207356 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.207363 | orchestrator | 2025-01-16 15:08:15.207371 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-01-16 15:08:15.207378 | orchestrator | Thursday 16 January 2025 15:03:29 +0000 (0:00:00.435) 0:04:48.471 ****** 2025-01-16 15:08:15.207386 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.207397 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.207406 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.207413 | orchestrator | 2025-01-16 15:08:15.207421 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-01-16 15:08:15.207429 | orchestrator | Thursday 16 January 2025 15:03:30 +0000 (0:00:00.580) 0:04:49.052 ****** 2025-01-16 15:08:15.207437 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.207445 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.207449 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.207454 | orchestrator | 2025-01-16 15:08:15.207459 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-01-16 15:08:15.207493 | orchestrator | Thursday 16 January 2025 15:03:31 +0000 (0:00:00.580) 0:04:49.632 ****** 2025-01-16 15:08:15.207499 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.207504 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.207509 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.207514 | orchestrator | 2025-01-16 15:08:15.207519 | orchestrator | TASK [ceph-mgr : set_fact container_exec_cmd] ********************************** 2025-01-16 15:08:15.207524 | orchestrator | Thursday 16 January 2025 15:03:31 +0000 (0:00:00.397) 0:04:50.030 ****** 2025-01-16 15:08:15.207529 | orchestrator | ok: [testbed-node-0] => 
(item=testbed-node-0) 2025-01-16 15:08:15.207534 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-01-16 15:08:15.207540 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-01-16 15:08:15.207545 | orchestrator | 2025-01-16 15:08:15.207549 | orchestrator | TASK [ceph-mgr : include common.yml] ******************************************* 2025-01-16 15:08:15.207571 | orchestrator | Thursday 16 January 2025 15:03:32 +0000 (0:00:00.549) 0:04:50.579 ****** 2025-01-16 15:08:15.207576 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:08:15.207581 | orchestrator | 2025-01-16 15:08:15.207586 | orchestrator | TASK [ceph-mgr : create mgr directory] ***************************************** 2025-01-16 15:08:15.207591 | orchestrator | Thursday 16 January 2025 15:03:32 +0000 (0:00:00.674) 0:04:51.254 ****** 2025-01-16 15:08:15.207596 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:08:15.207601 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:08:15.207606 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:08:15.207611 | orchestrator | 2025-01-16 15:08:15.207616 | orchestrator | TASK [ceph-mgr : fetch ceph mgr keyring] *************************************** 2025-01-16 15:08:15.207620 | orchestrator | Thursday 16 January 2025 15:03:33 +0000 (0:00:00.562) 0:04:51.817 ****** 2025-01-16 15:08:15.207625 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.207630 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.207635 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.207640 | orchestrator | 2025-01-16 15:08:15.207646 | orchestrator | TASK [ceph-mgr : create ceph mgr keyring(s) on a mon node] ********************* 2025-01-16 15:08:15.207654 | orchestrator | Thursday 16 January 2025 15:03:33 +0000 (0:00:00.418) 0:04:52.235 ****** 2025-01-16 15:08:15.207670 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-01-16 15:08:15.207678 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-01-16 15:08:15.207686 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-01-16 15:08:15.207693 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-01-16 15:08:15.207700 | orchestrator | 2025-01-16 15:08:15.207707 | orchestrator | TASK [ceph-mgr : set_fact _mgr_keys] ******************************************* 2025-01-16 15:08:15.207714 | orchestrator | Thursday 16 January 2025 15:03:38 +0000 (0:00:04.876) 0:04:57.112 ****** 2025-01-16 15:08:15.207722 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.207734 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.207742 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.207749 | orchestrator | 2025-01-16 15:08:15.207757 | orchestrator | TASK [ceph-mgr : get keys from monitors] *************************************** 2025-01-16 15:08:15.207765 | orchestrator | Thursday 16 January 2025 15:03:38 +0000 (0:00:00.352) 0:04:57.464 ****** 2025-01-16 15:08:15.207773 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-01-16 15:08:15.207780 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-01-16 15:08:15.207787 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-01-16 15:08:15.207795 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-01-16 15:08:15.207802 | orchestrator | ok: [testbed-node-0] => 
(item=None) 2025-01-16 15:08:15.207811 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-01-16 15:08:15.207820 | orchestrator | 2025-01-16 15:08:15.207828 | orchestrator | TASK [ceph-mgr : copy ceph key(s) if needed] *********************************** 2025-01-16 15:08:15.207835 | orchestrator | Thursday 16 January 2025 15:03:40 +0000 (0:00:01.250) 0:04:58.715 ****** 2025-01-16 15:08:15.207843 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-01-16 15:08:15.207850 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-01-16 15:08:15.207858 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-01-16 15:08:15.207866 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-01-16 15:08:15.207875 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-01-16 15:08:15.207883 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-01-16 15:08:15.207891 | orchestrator | 2025-01-16 15:08:15.207899 | orchestrator | TASK [ceph-mgr : set mgr key permissions] ************************************** 2025-01-16 15:08:15.207911 | orchestrator | Thursday 16 January 2025 15:03:40 +0000 (0:00:00.753) 0:04:59.468 ****** 2025-01-16 15:08:15.207919 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.207927 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.207938 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.207945 | orchestrator | 2025-01-16 15:08:15.207953 | orchestrator | TASK [ceph-mgr : append dashboard modules to ceph_mgr_modules] ***************** 2025-01-16 15:08:15.207962 | orchestrator | Thursday 16 January 2025 15:03:41 +0000 (0:00:00.611) 0:05:00.079 ****** 2025-01-16 15:08:15.207970 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.207977 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.207985 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.207992 | orchestrator | 2025-01-16 15:08:15.208000 | orchestrator | TASK [ceph-mgr : include pre_requisite.yml] ************************************ 2025-01-16 15:08:15.208009 | orchestrator | Thursday 16 January 2025 15:03:41 +0000 (0:00:00.215) 0:05:00.295 ****** 2025-01-16 15:08:15.208016 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.208023 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.208032 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.208039 | orchestrator | 2025-01-16 15:08:15.208047 | orchestrator | TASK [ceph-mgr : include start_mgr.yml] **************************************** 2025-01-16 15:08:15.208055 | orchestrator | Thursday 16 January 2025 15:03:41 +0000 (0:00:00.221) 0:05:00.517 ****** 2025-01-16 15:08:15.208095 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:08:15.208111 | orchestrator | 2025-01-16 15:08:15.208119 | orchestrator | TASK [ceph-mgr : ensure systemd service override directory exists] ************* 2025-01-16 15:08:15.208128 | orchestrator | Thursday 16 January 2025 15:03:42 +0000 (0:00:00.526) 0:05:01.043 ****** 2025-01-16 15:08:15.208136 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.208144 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.208152 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.208160 | orchestrator | 2025-01-16 15:08:15.208168 | orchestrator | TASK [ceph-mgr : add ceph-mgr systemd service overrides] *********************** 2025-01-16 15:08:15.208176 | orchestrator | 
Thursday 16 January 2025 15:03:42 +0000 (0:00:00.226) 0:05:01.270 ****** 2025-01-16 15:08:15.208184 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.208192 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.208200 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.208208 | orchestrator | 2025-01-16 15:08:15.208215 | orchestrator | TASK [ceph-mgr : include_tasks systemd.yml] ************************************ 2025-01-16 15:08:15.208223 | orchestrator | Thursday 16 January 2025 15:03:42 +0000 (0:00:00.240) 0:05:01.510 ****** 2025-01-16 15:08:15.208231 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:08:15.208240 | orchestrator | 2025-01-16 15:08:15.208248 | orchestrator | TASK [ceph-mgr : generate systemd unit file] *********************************** 2025-01-16 15:08:15.208255 | orchestrator | Thursday 16 January 2025 15:03:43 +0000 (0:00:00.519) 0:05:02.030 ****** 2025-01-16 15:08:15.208263 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:08:15.208271 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:08:15.208279 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:08:15.208287 | orchestrator | 2025-01-16 15:08:15.208295 | orchestrator | TASK [ceph-mgr : generate systemd ceph-mgr target file] ************************ 2025-01-16 15:08:15.208303 | orchestrator | Thursday 16 January 2025 15:03:44 +0000 (0:00:00.788) 0:05:02.819 ****** 2025-01-16 15:08:15.208311 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:08:15.208320 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:08:15.208327 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:08:15.208335 | orchestrator | 2025-01-16 15:08:15.208342 | orchestrator | TASK [ceph-mgr : enable ceph-mgr.target] *************************************** 2025-01-16 15:08:15.208350 | orchestrator | Thursday 16 January 2025 15:03:45 +0000 (0:00:00.924) 0:05:03.744 ****** 2025-01-16 15:08:15.208357 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:08:15.208365 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:08:15.208373 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:08:15.208380 | orchestrator | 2025-01-16 15:08:15.208387 | orchestrator | TASK [ceph-mgr : systemd start mgr] ******************************************** 2025-01-16 15:08:15.208395 | orchestrator | Thursday 16 January 2025 15:03:46 +0000 (0:00:01.148) 0:05:04.893 ****** 2025-01-16 15:08:15.208403 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:08:15.208411 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:08:15.208418 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:08:15.208426 | orchestrator | 2025-01-16 15:08:15.208434 | orchestrator | TASK [ceph-mgr : include mgr_modules.yml] ************************************** 2025-01-16 15:08:15.208441 | orchestrator | Thursday 16 January 2025 15:03:47 +0000 (0:00:01.253) 0:05:06.147 ****** 2025-01-16 15:08:15.208449 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.208457 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.208465 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-01-16 15:08:15.208473 | orchestrator | 2025-01-16 15:08:15.208480 | orchestrator | TASK [ceph-mgr : wait for all mgr to be up] ************************************ 2025-01-16 15:08:15.208497 | orchestrator | Thursday 16 January 2025 15:03:47 +0000 (0:00:00.395) 0:05:06.542 ****** 2025-01-16 
15:08:15.208505 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (30 retries left). 2025-01-16 15:08:15.208514 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (29 retries left). 2025-01-16 15:08:15.208528 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-01-16 15:08:15.208536 | orchestrator | 2025-01-16 15:08:15.208544 | orchestrator | TASK [ceph-mgr : get enabled modules from ceph-mgr] **************************** 2025-01-16 15:08:15.208605 | orchestrator | Thursday 16 January 2025 15:04:00 +0000 (0:00:12.653) 0:05:19.196 ****** 2025-01-16 15:08:15.208618 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-01-16 15:08:15.208626 | orchestrator | 2025-01-16 15:08:15.208635 | orchestrator | TASK [ceph-mgr : set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-01-16 15:08:15.208644 | orchestrator | Thursday 16 January 2025 15:04:01 +0000 (0:00:01.122) 0:05:20.318 ****** 2025-01-16 15:08:15.208652 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.208660 | orchestrator | 2025-01-16 15:08:15.208668 | orchestrator | TASK [ceph-mgr : set _disabled_ceph_mgr_modules fact] ************************** 2025-01-16 15:08:15.208676 | orchestrator | Thursday 16 January 2025 15:04:02 +0000 (0:00:00.302) 0:05:20.621 ****** 2025-01-16 15:08:15.208684 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.208693 | orchestrator | 2025-01-16 15:08:15.208706 | orchestrator | TASK [ceph-mgr : disable ceph mgr enabled modules] ***************************** 2025-01-16 15:08:15.208715 | orchestrator | Thursday 16 January 2025 15:04:02 +0000 (0:00:00.200) 0:05:20.821 ****** 2025-01-16 15:08:15.208722 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-01-16 15:08:15.208729 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-01-16 15:08:15.208737 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-01-16 15:08:15.208745 | orchestrator | 2025-01-16 15:08:15.208752 | orchestrator | TASK [ceph-mgr : add modules to ceph-mgr] ************************************** 2025-01-16 15:08:15.208760 | orchestrator | Thursday 16 January 2025 15:04:08 +0000 (0:00:06.278) 0:05:27.100 ****** 2025-01-16 15:08:15.208767 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-01-16 15:08:15.208816 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-01-16 15:08:15.208827 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-01-16 15:08:15.208836 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-01-16 15:08:15.208844 | orchestrator | 2025-01-16 15:08:15.208852 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-01-16 15:08:15.208860 | orchestrator | Thursday 16 January 2025 15:04:12 +0000 (0:00:04.438) 0:05:31.538 ****** 2025-01-16 15:08:15.208867 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:08:15.208876 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:08:15.208883 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:08:15.208932 | orchestrator | 2025-01-16 15:08:15.208941 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-01-16 15:08:15.208948 | orchestrator | Thursday 16 
January 2025 15:04:13 +0000 (0:00:00.602) 0:05:32.141 ****** 2025-01-16 15:08:15.208956 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:08:15.208963 | orchestrator | 2025-01-16 15:08:15.208970 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ******** 2025-01-16 15:08:15.208978 | orchestrator | Thursday 16 January 2025 15:04:13 +0000 (0:00:00.378) 0:05:32.519 ****** 2025-01-16 15:08:15.208986 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.208995 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.209002 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.209009 | orchestrator | 2025-01-16 15:08:15.209017 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-01-16 15:08:15.209025 | orchestrator | Thursday 16 January 2025 15:04:14 +0000 (0:00:00.254) 0:05:32.773 ****** 2025-01-16 15:08:15.209033 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:08:15.209041 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:08:15.209048 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:08:15.209064 | orchestrator | 2025-01-16 15:08:15.209072 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ******************** 2025-01-16 15:08:15.209079 | orchestrator | Thursday 16 January 2025 15:04:15 +0000 (0:00:00.917) 0:05:33.691 ****** 2025-01-16 15:08:15.209086 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-01-16 15:08:15.209094 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-01-16 15:08:15.209101 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-01-16 15:08:15.209109 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.209116 | orchestrator | 2025-01-16 15:08:15.209124 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] ********* 2025-01-16 15:08:15.209133 | orchestrator | Thursday 16 January 2025 15:04:15 +0000 (0:00:00.516) 0:05:34.208 ****** 2025-01-16 15:08:15.209138 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.209146 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.209154 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.209161 | orchestrator | 2025-01-16 15:08:15.209169 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-01-16 15:08:15.209176 | orchestrator | Thursday 16 January 2025 15:04:15 +0000 (0:00:00.284) 0:05:34.493 ****** 2025-01-16 15:08:15.209183 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:08:15.209191 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:08:15.209199 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:08:15.209206 | orchestrator | 2025-01-16 15:08:15.209214 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-01-16 15:08:15.209223 | orchestrator | 2025-01-16 15:08:15.209230 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-01-16 15:08:15.209239 | orchestrator | Thursday 16 January 2025 15:04:17 +0000 (0:00:01.396) 0:05:35.890 ****** 2025-01-16 15:08:15.209246 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 15:08:15.209252 | orchestrator | 2025-01-16 15:08:15.209256 | orchestrator | TASK [ceph-handler : 
check for a mon container] ******************************** 2025-01-16 15:08:15.209262 | orchestrator | Thursday 16 January 2025 15:04:17 +0000 (0:00:00.471) 0:05:36.362 ****** 2025-01-16 15:08:15.209270 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.209278 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.209285 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.209293 | orchestrator | 2025-01-16 15:08:15.209301 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-01-16 15:08:15.209309 | orchestrator | Thursday 16 January 2025 15:04:18 +0000 (0:00:00.198) 0:05:36.560 ****** 2025-01-16 15:08:15.209317 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.209325 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.209332 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.209341 | orchestrator | 2025-01-16 15:08:15.209348 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-01-16 15:08:15.209361 | orchestrator | Thursday 16 January 2025 15:04:18 +0000 (0:00:00.448) 0:05:37.008 ****** 2025-01-16 15:08:15.209367 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.209372 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.209377 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.209382 | orchestrator | 2025-01-16 15:08:15.209387 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-01-16 15:08:15.209391 | orchestrator | Thursday 16 January 2025 15:04:19 +0000 (0:00:00.679) 0:05:37.687 ****** 2025-01-16 15:08:15.209396 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.209401 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.209406 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.209411 | orchestrator | 2025-01-16 15:08:15.209416 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-01-16 15:08:15.209421 | orchestrator | Thursday 16 January 2025 15:04:19 +0000 (0:00:00.458) 0:05:38.145 ****** 2025-01-16 15:08:15.209425 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.209435 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.209440 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.209445 | orchestrator | 2025-01-16 15:08:15.209450 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-01-16 15:08:15.209455 | orchestrator | Thursday 16 January 2025 15:04:19 +0000 (0:00:00.226) 0:05:38.372 ****** 2025-01-16 15:08:15.209496 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.209501 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.209506 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.209511 | orchestrator | 2025-01-16 15:08:15.209516 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-01-16 15:08:15.209521 | orchestrator | Thursday 16 January 2025 15:04:20 +0000 (0:00:00.444) 0:05:38.816 ****** 2025-01-16 15:08:15.209526 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.209531 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.209536 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.209540 | orchestrator | 2025-01-16 15:08:15.209545 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-01-16 15:08:15.209550 | orchestrator | Thursday 16 
January 2025 15:04:20 +0000 (0:00:00.217) 0:05:39.033 ****** 2025-01-16 15:08:15.209573 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.209578 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.209583 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.209588 | orchestrator | 2025-01-16 15:08:15.209593 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-01-16 15:08:15.209598 | orchestrator | Thursday 16 January 2025 15:04:20 +0000 (0:00:00.216) 0:05:39.250 ****** 2025-01-16 15:08:15.209603 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.209608 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.209613 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.209618 | orchestrator | 2025-01-16 15:08:15.209623 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-01-16 15:08:15.209628 | orchestrator | Thursday 16 January 2025 15:04:20 +0000 (0:00:00.224) 0:05:39.474 ****** 2025-01-16 15:08:15.209633 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.209637 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.209642 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.209647 | orchestrator | 2025-01-16 15:08:15.209652 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-01-16 15:08:15.209657 | orchestrator | Thursday 16 January 2025 15:04:21 +0000 (0:00:00.388) 0:05:39.863 ****** 2025-01-16 15:08:15.209662 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.209667 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.209672 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.209677 | orchestrator | 2025-01-16 15:08:15.209682 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-01-16 15:08:15.209687 | orchestrator | Thursday 16 January 2025 15:04:21 +0000 (0:00:00.449) 0:05:40.313 ****** 2025-01-16 15:08:15.209692 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.209697 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.209702 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.209706 | orchestrator | 2025-01-16 15:08:15.209711 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-01-16 15:08:15.209716 | orchestrator | Thursday 16 January 2025 15:04:21 +0000 (0:00:00.203) 0:05:40.516 ****** 2025-01-16 15:08:15.209721 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.209730 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.209735 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.209739 | orchestrator | 2025-01-16 15:08:15.209744 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-01-16 15:08:15.209749 | orchestrator | Thursday 16 January 2025 15:04:22 +0000 (0:00:00.198) 0:05:40.715 ****** 2025-01-16 15:08:15.209754 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.209759 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.209768 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.209775 | orchestrator | 2025-01-16 15:08:15.209782 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-01-16 15:08:15.209790 | orchestrator | Thursday 16 January 2025 15:04:22 +0000 (0:00:00.379) 0:05:41.095 ****** 2025-01-16 15:08:15.209797 | 
orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.209805 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.209812 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.209819 | orchestrator | 2025-01-16 15:08:15.209827 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-01-16 15:08:15.209835 | orchestrator | Thursday 16 January 2025 15:04:22 +0000 (0:00:00.225) 0:05:41.321 ****** 2025-01-16 15:08:15.209843 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.209850 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.209859 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.209865 | orchestrator | 2025-01-16 15:08:15.209870 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-01-16 15:08:15.209875 | orchestrator | Thursday 16 January 2025 15:04:23 +0000 (0:00:00.249) 0:05:41.571 ****** 2025-01-16 15:08:15.209880 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.209885 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.209890 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.209895 | orchestrator | 2025-01-16 15:08:15.209899 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-01-16 15:08:15.209904 | orchestrator | Thursday 16 January 2025 15:04:23 +0000 (0:00:00.274) 0:05:41.845 ****** 2025-01-16 15:08:15.209909 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.209914 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.209919 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.209924 | orchestrator | 2025-01-16 15:08:15.209929 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-01-16 15:08:15.209936 | orchestrator | Thursday 16 January 2025 15:04:23 +0000 (0:00:00.347) 0:05:42.192 ****** 2025-01-16 15:08:15.209941 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.209946 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.209951 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.209956 | orchestrator | 2025-01-16 15:08:15.209961 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-01-16 15:08:15.209966 | orchestrator | Thursday 16 January 2025 15:04:23 +0000 (0:00:00.211) 0:05:42.404 ****** 2025-01-16 15:08:15.209970 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.209975 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.209980 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.209985 | orchestrator | 2025-01-16 15:08:15.209990 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-01-16 15:08:15.209995 | orchestrator | Thursday 16 January 2025 15:04:24 +0000 (0:00:00.269) 0:05:42.673 ****** 2025-01-16 15:08:15.209999 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.210066 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.210074 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.210079 | orchestrator | 2025-01-16 15:08:15.210084 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-01-16 15:08:15.210089 | orchestrator | Thursday 16 January 2025 15:04:24 +0000 (0:00:00.272) 0:05:42.946 ****** 2025-01-16 15:08:15.210093 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.210098 | orchestrator | skipping: [testbed-node-4] 
2025-01-16 15:08:15.210103 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.210108 | orchestrator | 2025-01-16 15:08:15.210113 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-01-16 15:08:15.210118 | orchestrator | Thursday 16 January 2025 15:04:24 +0000 (0:00:00.428) 0:05:43.374 ****** 2025-01-16 15:08:15.210123 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.210128 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.210132 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.210142 | orchestrator | 2025-01-16 15:08:15.210147 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-01-16 15:08:15.210152 | orchestrator | Thursday 16 January 2025 15:04:25 +0000 (0:00:00.270) 0:05:43.644 ****** 2025-01-16 15:08:15.210157 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.210162 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.210167 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.210172 | orchestrator | 2025-01-16 15:08:15.210177 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-01-16 15:08:15.210181 | orchestrator | Thursday 16 January 2025 15:04:25 +0000 (0:00:00.229) 0:05:43.873 ****** 2025-01-16 15:08:15.210186 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.210191 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.210196 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.210201 | orchestrator | 2025-01-16 15:08:15.210206 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-01-16 15:08:15.210211 | orchestrator | Thursday 16 January 2025 15:04:25 +0000 (0:00:00.255) 0:05:44.128 ****** 2025-01-16 15:08:15.210216 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.210221 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.210225 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.210230 | orchestrator | 2025-01-16 15:08:15.210235 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-01-16 15:08:15.210240 | orchestrator | Thursday 16 January 2025 15:04:25 +0000 (0:00:00.373) 0:05:44.502 ****** 2025-01-16 15:08:15.210245 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.210250 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.210255 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.210260 | orchestrator | 2025-01-16 15:08:15.210264 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-01-16 15:08:15.210270 | orchestrator | Thursday 16 January 2025 15:04:26 +0000 (0:00:00.341) 0:05:44.844 ****** 2025-01-16 15:08:15.210275 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.210280 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.210285 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.210289 | orchestrator | 2025-01-16 15:08:15.210294 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-01-16 15:08:15.210299 | orchestrator | Thursday 16 January 2025 15:04:26 +0000 (0:00:00.281) 0:05:45.125 ****** 2025-01-16 15:08:15.210304 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.210309 | orchestrator | skipping: [testbed-node-4] 2025-01-16 
15:08:15.210314 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.210319 | orchestrator | 2025-01-16 15:08:15.210324 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-01-16 15:08:15.210329 | orchestrator | Thursday 16 January 2025 15:04:26 +0000 (0:00:00.220) 0:05:45.346 ****** 2025-01-16 15:08:15.210337 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.210345 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.210350 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.210354 | orchestrator | 2025-01-16 15:08:15.210359 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-01-16 15:08:15.210364 | orchestrator | Thursday 16 January 2025 15:04:27 +0000 (0:00:00.421) 0:05:45.767 ****** 2025-01-16 15:08:15.210369 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.210374 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.210379 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.210383 | orchestrator | 2025-01-16 15:08:15.210388 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-01-16 15:08:15.210393 | orchestrator | Thursday 16 January 2025 15:04:27 +0000 (0:00:00.344) 0:05:46.111 ****** 2025-01-16 15:08:15.210398 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.210403 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.210408 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.210417 | orchestrator | 2025-01-16 15:08:15.210422 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-01-16 15:08:15.210427 | orchestrator | Thursday 16 January 2025 15:04:27 +0000 (0:00:00.282) 0:05:46.394 ****** 2025-01-16 15:08:15.210432 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-01-16 15:08:15.210437 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-01-16 15:08:15.210443 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.210450 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-01-16 15:08:15.210458 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-01-16 15:08:15.210465 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-01-16 15:08:15.210473 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.210480 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-01-16 15:08:15.210488 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.210496 | orchestrator | 2025-01-16 15:08:15.210502 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-01-16 15:08:15.210507 | orchestrator | Thursday 16 January 2025 15:04:28 +0000 (0:00:00.274) 0:05:46.668 ****** 2025-01-16 15:08:15.210512 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-01-16 15:08:15.210517 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-01-16 15:08:15.210538 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.210544 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-01-16 15:08:15.210549 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-01-16 15:08:15.210569 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.210578 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-01-16 
15:08:15.210583 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-01-16 15:08:15.210588 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.210593 | orchestrator | 2025-01-16 15:08:15.210598 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-01-16 15:08:15.210603 | orchestrator | Thursday 16 January 2025 15:04:28 +0000 (0:00:00.370) 0:05:47.038 ****** 2025-01-16 15:08:15.210607 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.210612 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.210617 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.210622 | orchestrator | 2025-01-16 15:08:15.210627 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-01-16 15:08:15.210632 | orchestrator | Thursday 16 January 2025 15:04:28 +0000 (0:00:00.228) 0:05:47.267 ****** 2025-01-16 15:08:15.210636 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.210641 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.210646 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.210651 | orchestrator | 2025-01-16 15:08:15.210656 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-01-16 15:08:15.210661 | orchestrator | Thursday 16 January 2025 15:04:28 +0000 (0:00:00.236) 0:05:47.504 ****** 2025-01-16 15:08:15.210666 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.210671 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.210676 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.210680 | orchestrator | 2025-01-16 15:08:15.210688 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-01-16 15:08:15.210693 | orchestrator | Thursday 16 January 2025 15:04:29 +0000 (0:00:00.214) 0:05:47.718 ****** 2025-01-16 15:08:15.210698 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.210703 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.210708 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.210713 | orchestrator | 2025-01-16 15:08:15.210718 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-01-16 15:08:15.210723 | orchestrator | Thursday 16 January 2025 15:04:29 +0000 (0:00:00.363) 0:05:48.082 ****** 2025-01-16 15:08:15.210734 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.210739 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.210744 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.210749 | orchestrator | 2025-01-16 15:08:15.210754 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-01-16 15:08:15.210759 | orchestrator | Thursday 16 January 2025 15:04:29 +0000 (0:00:00.221) 0:05:48.303 ****** 2025-01-16 15:08:15.210763 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.210768 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.210773 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.210778 | orchestrator | 2025-01-16 15:08:15.210783 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-01-16 15:08:15.210788 | orchestrator | Thursday 16 January 2025 15:04:29 +0000 (0:00:00.221) 0:05:48.525 ****** 2025-01-16 15:08:15.210793 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-3)  2025-01-16 15:08:15.210797 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-01-16 15:08:15.210802 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-01-16 15:08:15.210807 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.210812 | orchestrator | 2025-01-16 15:08:15.210817 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-01-16 15:08:15.210822 | orchestrator | Thursday 16 January 2025 15:04:30 +0000 (0:00:00.294) 0:05:48.819 ****** 2025-01-16 15:08:15.210826 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-01-16 15:08:15.210831 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-01-16 15:08:15.210836 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-01-16 15:08:15.210841 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.210846 | orchestrator | 2025-01-16 15:08:15.210850 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-01-16 15:08:15.210855 | orchestrator | Thursday 16 January 2025 15:04:30 +0000 (0:00:00.290) 0:05:49.110 ****** 2025-01-16 15:08:15.210860 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-01-16 15:08:15.210865 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-01-16 15:08:15.210870 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-01-16 15:08:15.210875 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.210879 | orchestrator | 2025-01-16 15:08:15.210884 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-01-16 15:08:15.210889 | orchestrator | Thursday 16 January 2025 15:04:30 +0000 (0:00:00.291) 0:05:49.402 ****** 2025-01-16 15:08:15.210894 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.210899 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.210904 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.210908 | orchestrator | 2025-01-16 15:08:15.210913 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-01-16 15:08:15.210918 | orchestrator | Thursday 16 January 2025 15:04:31 +0000 (0:00:00.378) 0:05:49.780 ****** 2025-01-16 15:08:15.210923 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-01-16 15:08:15.210928 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.210933 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-01-16 15:08:15.210938 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.210942 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-01-16 15:08:15.210947 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.210952 | orchestrator | 2025-01-16 15:08:15.210957 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-01-16 15:08:15.210975 | orchestrator | Thursday 16 January 2025 15:04:31 +0000 (0:00:00.496) 0:05:50.277 ****** 2025-01-16 15:08:15.210981 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.210986 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.210991 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.210996 | orchestrator | 2025-01-16 15:08:15.211004 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-01-16 15:08:15.211009 | 
orchestrator | Thursday 16 January 2025 15:04:32 +0000 (0:00:00.277) 0:05:50.554 ****** 2025-01-16 15:08:15.211014 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.211019 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.211024 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.211028 | orchestrator | 2025-01-16 15:08:15.211033 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-01-16 15:08:15.211038 | orchestrator | Thursday 16 January 2025 15:04:32 +0000 (0:00:00.299) 0:05:50.854 ****** 2025-01-16 15:08:15.211043 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-01-16 15:08:15.211048 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-01-16 15:08:15.211053 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.211058 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.211063 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-01-16 15:08:15.211067 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.211072 | orchestrator | 2025-01-16 15:08:15.211077 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-01-16 15:08:15.211082 | orchestrator | Thursday 16 January 2025 15:04:33 +0000 (0:00:01.084) 0:05:51.939 ****** 2025-01-16 15:08:15.211090 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-01-16 15:08:15.211095 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.211103 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-01-16 15:08:15.211108 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.211113 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-01-16 15:08:15.211117 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.211122 | orchestrator | 2025-01-16 15:08:15.211127 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-01-16 15:08:15.211132 | orchestrator | Thursday 16 January 2025 15:04:33 +0000 (0:00:00.396) 0:05:52.335 ****** 2025-01-16 15:08:15.211137 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-01-16 15:08:15.211142 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-01-16 15:08:15.211147 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-01-16 15:08:15.211152 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.211157 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-01-16 15:08:15.211162 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-01-16 15:08:15.211166 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-01-16 15:08:15.211171 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.211176 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-01-16 15:08:15.211181 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-01-16 15:08:15.211186 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-01-16 15:08:15.211191 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.211195 | orchestrator | 2025-01-16 15:08:15.211200 | orchestrator | TASK [ceph-config 
: generate ceph.conf configuration file] ********************* 2025-01-16 15:08:15.211205 | orchestrator | Thursday 16 January 2025 15:04:34 +0000 (0:00:00.609) 0:05:52.945 ****** 2025-01-16 15:08:15.211210 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.211215 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.211220 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.211224 | orchestrator | 2025-01-16 15:08:15.211229 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-01-16 15:08:15.211234 | orchestrator | Thursday 16 January 2025 15:04:34 +0000 (0:00:00.570) 0:05:53.515 ****** 2025-01-16 15:08:15.211239 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-01-16 15:08:15.211247 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.211252 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-01-16 15:08:15.211257 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.211262 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-01-16 15:08:15.211267 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.211271 | orchestrator | 2025-01-16 15:08:15.211276 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-01-16 15:08:15.211281 | orchestrator | Thursday 16 January 2025 15:04:35 +0000 (0:00:00.400) 0:05:53.915 ****** 2025-01-16 15:08:15.211286 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.211291 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.211296 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.211300 | orchestrator | 2025-01-16 15:08:15.211308 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-01-16 15:08:15.211313 | orchestrator | Thursday 16 January 2025 15:04:35 +0000 (0:00:00.505) 0:05:54.421 ****** 2025-01-16 15:08:15.211317 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.211322 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.211327 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.211332 | orchestrator | 2025-01-16 15:08:15.211337 | orchestrator | TASK [ceph-osd : set_fact add_osd] ********************************************* 2025-01-16 15:08:15.211342 | orchestrator | Thursday 16 January 2025 15:04:36 +0000 (0:00:00.475) 0:05:54.897 ****** 2025-01-16 15:08:15.211347 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.211351 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.211356 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.211361 | orchestrator | 2025-01-16 15:08:15.211366 | orchestrator | TASK [ceph-osd : set_fact container_exec_cmd] ********************************** 2025-01-16 15:08:15.211383 | orchestrator | Thursday 16 January 2025 15:04:36 +0000 (0:00:00.369) 0:05:55.266 ****** 2025-01-16 15:08:15.211388 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-01-16 15:08:15.211393 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-01-16 15:08:15.211398 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-01-16 15:08:15.211403 | orchestrator | 2025-01-16 15:08:15.211408 | orchestrator | TASK [ceph-osd : include_tasks system_tuning.yml] ****************************** 2025-01-16 15:08:15.211413 | orchestrator | Thursday 16 January 2025 15:04:37 +0000 (0:00:00.467) 
0:05:55.734 ****** 2025-01-16 15:08:15.211417 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 15:08:15.211422 | orchestrator | 2025-01-16 15:08:15.211427 | orchestrator | TASK [ceph-osd : disable osd directory parsing by updatedb] ******************** 2025-01-16 15:08:15.211432 | orchestrator | Thursday 16 January 2025 15:04:37 +0000 (0:00:00.365) 0:05:56.099 ****** 2025-01-16 15:08:15.211437 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.211442 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.211446 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.211451 | orchestrator | 2025-01-16 15:08:15.211456 | orchestrator | TASK [ceph-osd : disable osd directory path in updatedb.conf] ****************** 2025-01-16 15:08:15.211461 | orchestrator | Thursday 16 January 2025 15:04:37 +0000 (0:00:00.198) 0:05:56.297 ****** 2025-01-16 15:08:15.211466 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.211471 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.211476 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.211481 | orchestrator | 2025-01-16 15:08:15.211486 | orchestrator | TASK [ceph-osd : create tmpfiles.d directory] ********************************** 2025-01-16 15:08:15.211491 | orchestrator | Thursday 16 January 2025 15:04:38 +0000 (0:00:00.349) 0:05:56.647 ****** 2025-01-16 15:08:15.211495 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.211500 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.211505 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.211515 | orchestrator | 2025-01-16 15:08:15.211520 | orchestrator | TASK [ceph-osd : disable transparent hugepage] ********************************* 2025-01-16 15:08:15.211525 | orchestrator | Thursday 16 January 2025 15:04:38 +0000 (0:00:00.205) 0:05:56.853 ****** 2025-01-16 15:08:15.211530 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.211535 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.211540 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.211545 | orchestrator | 2025-01-16 15:08:15.211550 | orchestrator | TASK [ceph-osd : get default vm.min_free_kbytes] ******************************* 2025-01-16 15:08:15.211567 | orchestrator | Thursday 16 January 2025 15:04:38 +0000 (0:00:00.218) 0:05:57.071 ****** 2025-01-16 15:08:15.211573 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.211577 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.211582 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.211587 | orchestrator | 2025-01-16 15:08:15.211592 | orchestrator | TASK [ceph-osd : set_fact vm_min_free_kbytes] ********************************** 2025-01-16 15:08:15.211597 | orchestrator | Thursday 16 January 2025 15:04:38 +0000 (0:00:00.440) 0:05:57.512 ****** 2025-01-16 15:08:15.211602 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.211607 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.211612 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.211617 | orchestrator | 2025-01-16 15:08:15.211622 | orchestrator | TASK [ceph-osd : apply operating system tuning] ******************************** 2025-01-16 15:08:15.211627 | orchestrator | Thursday 16 January 2025 15:04:39 +0000 (0:00:00.480) 0:05:57.993 ****** 2025-01-16 15:08:15.211632 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': 
True}) 2025-01-16 15:08:15.211636 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-01-16 15:08:15.211641 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-01-16 15:08:15.211646 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-01-16 15:08:15.211651 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-01-16 15:08:15.211656 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-01-16 15:08:15.211661 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-01-16 15:08:15.211666 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-01-16 15:08:15.211670 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-01-16 15:08:15.211675 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-01-16 15:08:15.211683 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-01-16 15:08:15.211688 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-01-16 15:08:15.211692 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-01-16 15:08:15.211697 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-01-16 15:08:15.211702 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-01-16 15:08:15.211707 | orchestrator | 2025-01-16 15:08:15.211712 | orchestrator | TASK [ceph-osd : install dependencies] ***************************************** 2025-01-16 15:08:15.211717 | orchestrator | Thursday 16 January 2025 15:04:41 +0000 (0:00:02.503) 0:06:00.497 ****** 2025-01-16 15:08:15.211722 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.211727 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.211732 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.211739 | orchestrator | 2025-01-16 15:08:15.211757 | orchestrator | TASK [ceph-osd : include_tasks common.yml] ************************************* 2025-01-16 15:08:15.211762 | orchestrator | Thursday 16 January 2025 15:04:42 +0000 (0:00:00.211) 0:06:00.708 ****** 2025-01-16 15:08:15.211771 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 15:08:15.211776 | orchestrator | 2025-01-16 15:08:15.211781 | orchestrator | TASK [ceph-osd : create bootstrap-osd and osd directories] ********************* 2025-01-16 15:08:15.211785 | orchestrator | Thursday 16 January 2025 15:04:42 +0000 (0:00:00.530) 0:06:01.239 ****** 2025-01-16 15:08:15.211790 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-01-16 15:08:15.211795 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-01-16 15:08:15.211800 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-01-16 15:08:15.211805 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-01-16 15:08:15.211810 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-01-16 15:08:15.211815 | orchestrator | ok: 
[testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-01-16 15:08:15.211820 | orchestrator | 2025-01-16 15:08:15.211825 | orchestrator | TASK [ceph-osd : get keys from monitors] *************************************** 2025-01-16 15:08:15.211829 | orchestrator | Thursday 16 January 2025 15:04:43 +0000 (0:00:00.864) 0:06:02.103 ****** 2025-01-16 15:08:15.211834 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-01-16 15:08:15.211839 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-01-16 15:08:15.211844 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-01-16 15:08:15.211849 | orchestrator | 2025-01-16 15:08:15.211853 | orchestrator | TASK [ceph-osd : copy ceph key(s) if needed] *********************************** 2025-01-16 15:08:15.211858 | orchestrator | Thursday 16 January 2025 15:04:44 +0000 (0:00:01.347) 0:06:03.451 ****** 2025-01-16 15:08:15.211863 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-01-16 15:08:15.211868 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-01-16 15:08:15.211873 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:08:15.211878 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-01-16 15:08:15.211883 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-01-16 15:08:15.211887 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:08:15.211892 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-01-16 15:08:15.211897 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-01-16 15:08:15.211902 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:08:15.211907 | orchestrator | 2025-01-16 15:08:15.211911 | orchestrator | TASK [ceph-osd : set noup flag] ************************************************ 2025-01-16 15:08:15.211916 | orchestrator | Thursday 16 January 2025 15:04:45 +0000 (0:00:00.957) 0:06:04.409 ****** 2025-01-16 15:08:15.211921 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-01-16 15:08:15.211926 | orchestrator | 2025-01-16 15:08:15.211931 | orchestrator | TASK [ceph-osd : include container_options_facts.yml] ************************** 2025-01-16 15:08:15.211936 | orchestrator | Thursday 16 January 2025 15:04:47 +0000 (0:00:01.639) 0:06:06.049 ****** 2025-01-16 15:08:15.211940 | orchestrator | included: /ansible/roles/ceph-osd/tasks/container_options_facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 15:08:15.211945 | orchestrator | 2025-01-16 15:08:15.211950 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=0'] *** 2025-01-16 15:08:15.211955 | orchestrator | Thursday 16 January 2025 15:04:47 +0000 (0:00:00.455) 0:06:06.505 ****** 2025-01-16 15:08:15.211960 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.211968 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.211972 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.211977 | orchestrator | 2025-01-16 15:08:15.211982 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=1'] *** 2025-01-16 15:08:15.211987 | orchestrator | Thursday 16 January 2025 15:04:48 +0000 (0:00:00.312) 0:06:06.817 ****** 2025-01-16 15:08:15.211992 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.211997 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.212006 | orchestrator | skipping: [testbed-node-5] 2025-01-16 
15:08:15.212011 | orchestrator | 2025-01-16 15:08:15.212016 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=0'] *** 2025-01-16 15:08:15.212020 | orchestrator | Thursday 16 January 2025 15:04:48 +0000 (0:00:00.211) 0:06:07.029 ****** 2025-01-16 15:08:15.212025 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.212030 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.212038 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.212043 | orchestrator | 2025-01-16 15:08:15.212048 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=1'] *** 2025-01-16 15:08:15.212053 | orchestrator | Thursday 16 January 2025 15:04:48 +0000 (0:00:00.206) 0:06:07.236 ****** 2025-01-16 15:08:15.212058 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.212063 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.212068 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.212073 | orchestrator | 2025-01-16 15:08:15.212078 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm.yml] ****************************** 2025-01-16 15:08:15.212083 | orchestrator | Thursday 16 January 2025 15:04:48 +0000 (0:00:00.293) 0:06:07.529 ****** 2025-01-16 15:08:15.212088 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 15:08:15.212092 | orchestrator | 2025-01-16 15:08:15.212097 | orchestrator | TASK [ceph-osd : use ceph-volume to create bluestore osds] ********************* 2025-01-16 15:08:15.212102 | orchestrator | Thursday 16 January 2025 15:04:49 +0000 (0:00:00.583) 0:06:08.113 ****** 2025-01-16 15:08:15.212107 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-53007ac5-07c2-53cd-add6-e57729925218', 'data_vg': 'ceph-53007ac5-07c2-53cd-add6-e57729925218'}) 2025-01-16 15:08:15.212125 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02', 'data_vg': 'ceph-d9c27d09-d80a-5255-9afb-1d5e2e5f2f02'}) 2025-01-16 15:08:15.212131 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-53488163-bd74-50cc-bfa0-f1a94ed01f33', 'data_vg': 'ceph-53488163-bd74-50cc-bfa0-f1a94ed01f33'}) 2025-01-16 15:08:15.212136 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-54c8019f-0033-5b40-9c4f-7f2e43f78b89', 'data_vg': 'ceph-54c8019f-0033-5b40-9c4f-7f2e43f78b89'}) 2025-01-16 15:08:15.212141 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-9e6463fb-b573-5867-8a5d-b884b3259bdd', 'data_vg': 'ceph-9e6463fb-b573-5867-8a5d-b884b3259bdd'}) 2025-01-16 15:08:15.212146 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-562c7eeb-0cc2-5747-a030-082dcf3dd7cc', 'data_vg': 'ceph-562c7eeb-0cc2-5747-a030-082dcf3dd7cc'}) 2025-01-16 15:08:15.212151 | orchestrator | 2025-01-16 15:08:15.212156 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm-batch.yml] ************************ 2025-01-16 15:08:15.212161 | orchestrator | Thursday 16 January 2025 15:05:15 +0000 (0:00:25.978) 0:06:34.091 ****** 2025-01-16 15:08:15.212165 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.212170 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.212175 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.212180 | orchestrator | 2025-01-16 15:08:15.212185 | orchestrator | TASK [ceph-osd : include_tasks start_osds.yml] 
********************************* 2025-01-16 15:08:15.212190 | orchestrator | Thursday 16 January 2025 15:05:15 +0000 (0:00:00.286) 0:06:34.378 ****** 2025-01-16 15:08:15.212195 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 15:08:15.212199 | orchestrator | 2025-01-16 15:08:15.212204 | orchestrator | TASK [ceph-osd : get osd ids] ************************************************** 2025-01-16 15:08:15.212209 | orchestrator | Thursday 16 January 2025 15:05:16 +0000 (0:00:00.369) 0:06:34.748 ****** 2025-01-16 15:08:15.212214 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.212219 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.212224 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.212233 | orchestrator | 2025-01-16 15:08:15.212240 | orchestrator | TASK [ceph-osd : collect osd ids] ********************************************** 2025-01-16 15:08:15.212245 | orchestrator | Thursday 16 January 2025 15:05:16 +0000 (0:00:00.420) 0:06:35.169 ****** 2025-01-16 15:08:15.212250 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:08:15.212255 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:08:15.212260 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:08:15.212265 | orchestrator | 2025-01-16 15:08:15.212270 | orchestrator | TASK [ceph-osd : include_tasks systemd.yml] ************************************ 2025-01-16 15:08:15.212274 | orchestrator | Thursday 16 January 2025 15:05:17 +0000 (0:00:01.070) 0:06:36.239 ****** 2025-01-16 15:08:15.212279 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 15:08:15.212284 | orchestrator | 2025-01-16 15:08:15.212289 | orchestrator | TASK [ceph-osd : generate systemd unit file] *********************************** 2025-01-16 15:08:15.212294 | orchestrator | Thursday 16 January 2025 15:05:18 +0000 (0:00:00.360) 0:06:36.600 ****** 2025-01-16 15:08:15.212298 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:08:15.212303 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:08:15.212308 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:08:15.212313 | orchestrator | 2025-01-16 15:08:15.212318 | orchestrator | TASK [ceph-osd : generate systemd ceph-osd target file] ************************ 2025-01-16 15:08:15.212322 | orchestrator | Thursday 16 January 2025 15:05:18 +0000 (0:00:00.767) 0:06:37.367 ****** 2025-01-16 15:08:15.212327 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:08:15.212332 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:08:15.212337 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:08:15.212342 | orchestrator | 2025-01-16 15:08:15.212347 | orchestrator | TASK [ceph-osd : enable ceph-osd.target] *************************************** 2025-01-16 15:08:15.212352 | orchestrator | Thursday 16 January 2025 15:05:19 +0000 (0:00:00.899) 0:06:38.266 ****** 2025-01-16 15:08:15.212356 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:08:15.212361 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:08:15.212366 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:08:15.212371 | orchestrator | 2025-01-16 15:08:15.212376 | orchestrator | TASK [ceph-osd : ensure systemd service override directory exists] ************* 2025-01-16 15:08:15.212381 | orchestrator | Thursday 16 January 2025 15:05:20 +0000 (0:00:01.254) 0:06:39.521 ****** 2025-01-16 15:08:15.212385 | orchestrator | skipping: 
[testbed-node-3] 2025-01-16 15:08:15.212390 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.212395 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.212400 | orchestrator | 2025-01-16 15:08:15.212405 | orchestrator | TASK [ceph-osd : add ceph-osd systemd service overrides] *********************** 2025-01-16 15:08:15.212410 | orchestrator | Thursday 16 January 2025 15:05:21 +0000 (0:00:00.219) 0:06:39.741 ****** 2025-01-16 15:08:15.212414 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.212419 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.212424 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.212429 | orchestrator | 2025-01-16 15:08:15.212434 | orchestrator | TASK [ceph-osd : ensure "/var/lib/ceph/osd/{{ cluster }}-{{ item }}" is present] *** 2025-01-16 15:08:15.212439 | orchestrator | Thursday 16 January 2025 15:05:21 +0000 (0:00:00.330) 0:06:40.071 ****** 2025-01-16 15:08:15.212443 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-01-16 15:08:15.212448 | orchestrator | ok: [testbed-node-4] => (item=1) 2025-01-16 15:08:15.212453 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-01-16 15:08:15.212458 | orchestrator | ok: [testbed-node-3] => (item=5) 2025-01-16 15:08:15.212463 | orchestrator | ok: [testbed-node-4] => (item=4) 2025-01-16 15:08:15.212468 | orchestrator | ok: [testbed-node-5] => (item=3) 2025-01-16 15:08:15.212472 | orchestrator | 2025-01-16 15:08:15.212477 | orchestrator | TASK [ceph-osd : systemd start osd] ******************************************** 2025-01-16 15:08:15.212494 | orchestrator | Thursday 16 January 2025 15:05:22 +0000 (0:00:00.698) 0:06:40.770 ****** 2025-01-16 15:08:15.212499 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-01-16 15:08:15.212509 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-01-16 15:08:15.212514 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-01-16 15:08:15.212519 | orchestrator | changed: [testbed-node-3] => (item=5) 2025-01-16 15:08:15.212524 | orchestrator | changed: [testbed-node-4] => (item=4) 2025-01-16 15:08:15.212529 | orchestrator | changed: [testbed-node-5] => (item=3) 2025-01-16 15:08:15.212534 | orchestrator | 2025-01-16 15:08:15.212539 | orchestrator | TASK [ceph-osd : unset noup flag] ********************************************** 2025-01-16 15:08:15.212543 | orchestrator | Thursday 16 January 2025 15:05:24 +0000 (0:00:02.464) 0:06:43.235 ****** 2025-01-16 15:08:15.212548 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.212588 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.212594 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-01-16 15:08:15.212599 | orchestrator | 2025-01-16 15:08:15.212604 | orchestrator | TASK [ceph-osd : wait for all osd to be up] ************************************ 2025-01-16 15:08:15.212609 | orchestrator | Thursday 16 January 2025 15:05:26 +0000 (0:00:01.922) 0:06:45.157 ****** 2025-01-16 15:08:15.212614 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.212619 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.212624 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: wait for all osd to be up (60 retries left). 
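For readers following along: the sequence above (create bluestore OSDs from the pre-built LVM volumes, generate the systemd units, set and later unset the noup flag, then wait for all OSDs to come up) corresponds roughly to the manual steps sketched below. This is an illustrative approximation, not the playbook code; the VG/LV name and the OSD ids are taken from the log entries above, and in this containerized deployment the ceph and ceph-volume commands actually run inside the generated service containers.

# On a monitor node: keep freshly created OSDs from being marked "up" mid-deploy
ceph osd set noup

# On each storage node, once per OSD (VG/LV pair as reported by the role above)
ceph-volume lvm create --bluestore \
    --data ceph-53488163-bd74-50cc-bfa0-f1a94ed01f33/osd-block-53488163-bd74-50cc-bfa0-f1a94ed01f33

# Enable and start the per-OSD units and the umbrella target
# (OSD ids come from 'ceph-volume lvm list', e.g. 0 and 5 on testbed-node-3)
systemctl enable --now ceph-osd@0.service
systemctl enable --now ceph-osd@5.service
systemctl enable ceph-osd.target

# Back on a monitor node: allow OSDs to come up and wait until all six report in,
# mirroring the retry seen in "wait for all osd to be up" above
ceph osd unset noup
until ceph osd stat | grep -q '6 up'; do sleep 10; done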
2025-01-16 15:08:15.212629 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-01-16 15:08:15.212633 | orchestrator | 2025-01-16 15:08:15.212638 | orchestrator | TASK [ceph-osd : include crush_rules.yml] ************************************** 2025-01-16 15:08:15.212643 | orchestrator | Thursday 16 January 2025 15:05:38 +0000 (0:00:11.676) 0:06:56.833 ****** 2025-01-16 15:08:15.212648 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.212653 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.212658 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.212662 | orchestrator | 2025-01-16 15:08:15.212667 | orchestrator | TASK [ceph-osd : include openstack_config.yml] ********************************* 2025-01-16 15:08:15.212672 | orchestrator | Thursday 16 January 2025 15:05:38 +0000 (0:00:00.429) 0:06:57.263 ****** 2025-01-16 15:08:15.212677 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.212682 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.212687 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.212692 | orchestrator | 2025-01-16 15:08:15.212696 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-01-16 15:08:15.212701 | orchestrator | Thursday 16 January 2025 15:05:39 +0000 (0:00:00.811) 0:06:58.074 ****** 2025-01-16 15:08:15.212706 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:08:15.212711 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:08:15.212716 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:08:15.212721 | orchestrator | 2025-01-16 15:08:15.212725 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] ********************************** 2025-01-16 15:08:15.212730 | orchestrator | Thursday 16 January 2025 15:05:40 +0000 (0:00:00.518) 0:06:58.593 ****** 2025-01-16 15:08:15.212735 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 15:08:15.212742 | orchestrator | 2025-01-16 15:08:15.212750 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact trigger_restart] ********************** 2025-01-16 15:08:15.212758 | orchestrator | Thursday 16 January 2025 15:05:40 +0000 (0:00:00.588) 0:06:59.182 ****** 2025-01-16 15:08:15.212766 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-01-16 15:08:15.212779 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-01-16 15:08:15.212787 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-01-16 15:08:15.212796 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.212805 | orchestrator | 2025-01-16 15:08:15.212810 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called before restart] ******** 2025-01-16 15:08:15.212814 | orchestrator | Thursday 16 January 2025 15:05:40 +0000 (0:00:00.310) 0:06:59.493 ****** 2025-01-16 15:08:15.212825 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.212830 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.212837 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.212842 | orchestrator | 2025-01-16 15:08:15.212847 | orchestrator | RUNNING HANDLER [ceph-handler : unset noup flag] ******************************* 2025-01-16 15:08:15.212851 | orchestrator | Thursday 16 January 2025 15:05:41 +0000 (0:00:00.256) 0:06:59.749 ****** 2025-01-16 15:08:15.212856 | orchestrator | skipping: [testbed-node-3] 2025-01-16 
15:08:15.212861 | orchestrator | 2025-01-16 15:08:15.212866 | orchestrator | RUNNING HANDLER [ceph-handler : copy osd restart script] *********************** 2025-01-16 15:08:15.212871 | orchestrator | Thursday 16 January 2025 15:05:41 +0000 (0:00:00.168) 0:06:59.918 ****** 2025-01-16 15:08:15.212875 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.212880 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.212885 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.212890 | orchestrator | 2025-01-16 15:08:15.212895 | orchestrator | RUNNING HANDLER [ceph-handler : get pool list] ********************************* 2025-01-16 15:08:15.212899 | orchestrator | Thursday 16 January 2025 15:05:41 +0000 (0:00:00.368) 0:07:00.286 ****** 2025-01-16 15:08:15.212904 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.212909 | orchestrator | 2025-01-16 15:08:15.212914 | orchestrator | RUNNING HANDLER [ceph-handler : get balancer module status] ******************** 2025-01-16 15:08:15.212918 | orchestrator | Thursday 16 January 2025 15:05:41 +0000 (0:00:00.151) 0:07:00.438 ****** 2025-01-16 15:08:15.212923 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.212928 | orchestrator | 2025-01-16 15:08:15.212933 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] ************** 2025-01-16 15:08:15.212938 | orchestrator | Thursday 16 January 2025 15:05:42 +0000 (0:00:00.183) 0:07:00.621 ****** 2025-01-16 15:08:15.212942 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.212947 | orchestrator | 2025-01-16 15:08:15.212952 | orchestrator | RUNNING HANDLER [ceph-handler : disable balancer] ****************************** 2025-01-16 15:08:15.212957 | orchestrator | Thursday 16 January 2025 15:05:42 +0000 (0:00:00.073) 0:07:00.694 ****** 2025-01-16 15:08:15.212978 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.212984 | orchestrator | 2025-01-16 15:08:15.212990 | orchestrator | RUNNING HANDLER [ceph-handler : disable pg autoscale on pools] ***************** 2025-01-16 15:08:15.212995 | orchestrator | Thursday 16 January 2025 15:05:42 +0000 (0:00:00.174) 0:07:00.868 ****** 2025-01-16 15:08:15.213000 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.213005 | orchestrator | 2025-01-16 15:08:15.213010 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph osds daemon(s)] ******************* 2025-01-16 15:08:15.213016 | orchestrator | Thursday 16 January 2025 15:05:42 +0000 (0:00:00.150) 0:07:01.019 ****** 2025-01-16 15:08:15.213021 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-01-16 15:08:15.213026 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-01-16 15:08:15.213031 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-01-16 15:08:15.213036 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.213042 | orchestrator | 2025-01-16 15:08:15.213047 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called after restart] ********* 2025-01-16 15:08:15.213052 | orchestrator | Thursday 16 January 2025 15:05:42 +0000 (0:00:00.288) 0:07:01.308 ****** 2025-01-16 15:08:15.213057 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.213062 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.213067 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.213073 | orchestrator | 2025-01-16 15:08:15.213078 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable pg 
autoscale on pools] *************** 2025-01-16 15:08:15.213083 | orchestrator | Thursday 16 January 2025 15:05:43 +0000 (0:00:00.251) 0:07:01.560 ****** 2025-01-16 15:08:15.213088 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.213093 | orchestrator | 2025-01-16 15:08:15.213098 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable balancer] **************************** 2025-01-16 15:08:15.213104 | orchestrator | Thursday 16 January 2025 15:05:43 +0000 (0:00:00.166) 0:07:01.727 ****** 2025-01-16 15:08:15.213112 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.213117 | orchestrator | 2025-01-16 15:08:15.213122 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-01-16 15:08:15.213128 | orchestrator | Thursday 16 January 2025 15:05:43 +0000 (0:00:00.167) 0:07:01.894 ****** 2025-01-16 15:08:15.213133 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:08:15.213138 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:08:15.213143 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:08:15.213148 | orchestrator | 2025-01-16 15:08:15.213153 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-01-16 15:08:15.213159 | orchestrator | 2025-01-16 15:08:15.213164 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-01-16 15:08:15.213169 | orchestrator | Thursday 16 January 2025 15:05:45 +0000 (0:00:02.311) 0:07:04.206 ****** 2025-01-16 15:08:15.213174 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:08:15.213181 | orchestrator | 2025-01-16 15:08:15.213186 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-01-16 15:08:15.213191 | orchestrator | Thursday 16 January 2025 15:05:46 +0000 (0:00:00.912) 0:07:05.119 ****** 2025-01-16 15:08:15.213197 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.213202 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.213207 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.213212 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.213220 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.213225 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.213230 | orchestrator | 2025-01-16 15:08:15.213236 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-01-16 15:08:15.213243 | orchestrator | Thursday 16 January 2025 15:05:47 +0000 (0:00:00.798) 0:07:05.917 ****** 2025-01-16 15:08:15.213249 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.213254 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.213259 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.213264 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.213269 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.213274 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.213280 | orchestrator | 2025-01-16 15:08:15.213285 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-01-16 15:08:15.213290 | orchestrator | Thursday 16 January 2025 15:05:47 +0000 (0:00:00.604) 0:07:06.521 ****** 2025-01-16 15:08:15.213295 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.213300 | orchestrator | ok: 
[testbed-node-3] 2025-01-16 15:08:15.213305 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.213311 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.213316 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.213321 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.213326 | orchestrator | 2025-01-16 15:08:15.213332 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-01-16 15:08:15.213337 | orchestrator | Thursday 16 January 2025 15:05:48 +0000 (0:00:00.480) 0:07:07.002 ****** 2025-01-16 15:08:15.213342 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.213347 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.213352 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.213357 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.213362 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.213368 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.213373 | orchestrator | 2025-01-16 15:08:15.213378 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-01-16 15:08:15.213383 | orchestrator | Thursday 16 January 2025 15:05:49 +0000 (0:00:00.611) 0:07:07.613 ****** 2025-01-16 15:08:15.213388 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.213393 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.213402 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.213407 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.213412 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.213417 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.213422 | orchestrator | 2025-01-16 15:08:15.213427 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-01-16 15:08:15.213433 | orchestrator | Thursday 16 January 2025 15:05:49 +0000 (0:00:00.774) 0:07:08.388 ****** 2025-01-16 15:08:15.213438 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.213455 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.213464 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.213469 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.213475 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.213480 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.213485 | orchestrator | 2025-01-16 15:08:15.213490 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-01-16 15:08:15.213496 | orchestrator | Thursday 16 January 2025 15:05:50 +0000 (0:00:00.635) 0:07:09.024 ****** 2025-01-16 15:08:15.213501 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.213506 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.213511 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.213516 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.213522 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.213527 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.213532 | orchestrator | 2025-01-16 15:08:15.213537 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-01-16 15:08:15.213543 | orchestrator | Thursday 16 January 2025 15:05:50 +0000 (0:00:00.439) 0:07:09.463 ****** 2025-01-16 15:08:15.213548 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.213566 | orchestrator | skipping: [testbed-node-4] 2025-01-16 
15:08:15.213572 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.213577 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.213582 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.213586 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.213591 | orchestrator | 2025-01-16 15:08:15.213596 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-01-16 15:08:15.213601 | orchestrator | Thursday 16 January 2025 15:05:51 +0000 (0:00:00.588) 0:07:10.052 ****** 2025-01-16 15:08:15.213606 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.213611 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.213616 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.213621 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.213625 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.213630 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.213635 | orchestrator | 2025-01-16 15:08:15.213640 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-01-16 15:08:15.213645 | orchestrator | Thursday 16 January 2025 15:05:51 +0000 (0:00:00.431) 0:07:10.484 ****** 2025-01-16 15:08:15.213650 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.213655 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.213659 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.213664 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.213669 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.213674 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.213679 | orchestrator | 2025-01-16 15:08:15.213684 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-01-16 15:08:15.213689 | orchestrator | Thursday 16 January 2025 15:05:52 +0000 (0:00:00.619) 0:07:11.104 ****** 2025-01-16 15:08:15.213693 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.213698 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.213703 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.213708 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.213713 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.213721 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.213726 | orchestrator | 2025-01-16 15:08:15.213730 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-01-16 15:08:15.213735 | orchestrator | Thursday 16 January 2025 15:05:53 +0000 (0:00:00.937) 0:07:12.041 ****** 2025-01-16 15:08:15.213740 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.213745 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.213750 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.213755 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.213760 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.213765 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.213769 | orchestrator | 2025-01-16 15:08:15.213774 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-01-16 15:08:15.213779 | orchestrator | Thursday 16 January 2025 15:05:54 +0000 (0:00:00.619) 0:07:12.661 ****** 2025-01-16 15:08:15.213784 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.213789 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.213794 | orchestrator 
| skipping: [testbed-node-5] 2025-01-16 15:08:15.213798 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.213803 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.213808 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.213813 | orchestrator | 2025-01-16 15:08:15.213818 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-01-16 15:08:15.213823 | orchestrator | Thursday 16 January 2025 15:05:54 +0000 (0:00:00.447) 0:07:13.108 ****** 2025-01-16 15:08:15.213828 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.213833 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.213837 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.213842 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.213847 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.213854 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.213859 | orchestrator | 2025-01-16 15:08:15.213864 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-01-16 15:08:15.213869 | orchestrator | Thursday 16 January 2025 15:05:55 +0000 (0:00:00.651) 0:07:13.760 ****** 2025-01-16 15:08:15.213874 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.213879 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.213883 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.213888 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.213893 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.213898 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.213903 | orchestrator | 2025-01-16 15:08:15.213908 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-01-16 15:08:15.213912 | orchestrator | Thursday 16 January 2025 15:05:55 +0000 (0:00:00.527) 0:07:14.287 ****** 2025-01-16 15:08:15.213917 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.213922 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.213927 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.213932 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.213936 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.213941 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.213946 | orchestrator | 2025-01-16 15:08:15.213951 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-01-16 15:08:15.213970 | orchestrator | Thursday 16 January 2025 15:05:56 +0000 (0:00:00.704) 0:07:14.991 ****** 2025-01-16 15:08:15.213975 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.213980 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.213985 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.213990 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.213995 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.213999 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.214004 | orchestrator | 2025-01-16 15:08:15.214012 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-01-16 15:08:15.214035 | orchestrator | Thursday 16 January 2025 15:05:56 +0000 (0:00:00.517) 0:07:15.509 ****** 2025-01-16 15:08:15.214043 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.214048 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.214053 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.214058 | 
orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.214062 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.214067 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.214072 | orchestrator | 2025-01-16 15:08:15.214077 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-01-16 15:08:15.214082 | orchestrator | Thursday 16 January 2025 15:05:57 +0000 (0:00:00.550) 0:07:16.059 ****** 2025-01-16 15:08:15.214087 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.214091 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.214096 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.214101 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.214106 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.214111 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.214116 | orchestrator | 2025-01-16 15:08:15.214121 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-01-16 15:08:15.214125 | orchestrator | Thursday 16 January 2025 15:05:57 +0000 (0:00:00.446) 0:07:16.505 ****** 2025-01-16 15:08:15.214130 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.214135 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.214140 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.214145 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.214150 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.214155 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.214159 | orchestrator | 2025-01-16 15:08:15.214164 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-01-16 15:08:15.214169 | orchestrator | Thursday 16 January 2025 15:05:58 +0000 (0:00:00.633) 0:07:17.139 ****** 2025-01-16 15:08:15.214174 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.214179 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.214184 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.214189 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.214194 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.214198 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.214203 | orchestrator | 2025-01-16 15:08:15.214208 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-01-16 15:08:15.214213 | orchestrator | Thursday 16 January 2025 15:05:59 +0000 (0:00:00.451) 0:07:17.591 ****** 2025-01-16 15:08:15.214218 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.214222 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.214227 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.214232 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.214237 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.214242 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.214247 | orchestrator | 2025-01-16 15:08:15.214251 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-01-16 15:08:15.214256 | orchestrator | Thursday 16 January 2025 15:05:59 +0000 (0:00:00.568) 0:07:18.160 ****** 2025-01-16 15:08:15.214261 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.214266 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.214271 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.214278 | orchestrator | skipping: [testbed-node-0] 
2025-01-16 15:08:15.214283 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.214288 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.214293 | orchestrator | 2025-01-16 15:08:15.214298 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-01-16 15:08:15.214303 | orchestrator | Thursday 16 January 2025 15:06:00 +0000 (0:00:00.415) 0:07:18.576 ****** 2025-01-16 15:08:15.214308 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.214313 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.214317 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.214325 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.214330 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.214335 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.214340 | orchestrator | 2025-01-16 15:08:15.214345 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-01-16 15:08:15.214350 | orchestrator | Thursday 16 January 2025 15:06:00 +0000 (0:00:00.539) 0:07:19.116 ****** 2025-01-16 15:08:15.214355 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.214359 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.214364 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.214369 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.214374 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.214379 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.214383 | orchestrator | 2025-01-16 15:08:15.214388 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-01-16 15:08:15.214393 | orchestrator | Thursday 16 January 2025 15:06:00 +0000 (0:00:00.414) 0:07:19.531 ****** 2025-01-16 15:08:15.214398 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.214402 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.214407 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.214412 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.214417 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.214422 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.214426 | orchestrator | 2025-01-16 15:08:15.214431 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-01-16 15:08:15.214436 | orchestrator | Thursday 16 January 2025 15:06:01 +0000 (0:00:00.523) 0:07:20.054 ****** 2025-01-16 15:08:15.214441 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.214445 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.214450 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.214455 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.214460 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.214477 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.214483 | orchestrator | 2025-01-16 15:08:15.214488 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-01-16 15:08:15.214493 | orchestrator | Thursday 16 January 2025 15:06:01 +0000 (0:00:00.397) 0:07:20.451 ****** 2025-01-16 15:08:15.214498 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.214503 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.214508 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.214513 | orchestrator | 
skipping: [testbed-node-0] 2025-01-16 15:08:15.214517 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.214523 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.214527 | orchestrator | 2025-01-16 15:08:15.214532 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-01-16 15:08:15.214537 | orchestrator | Thursday 16 January 2025 15:06:02 +0000 (0:00:00.536) 0:07:20.988 ****** 2025-01-16 15:08:15.214542 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.214547 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.214565 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.214574 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.214580 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.214584 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.214589 | orchestrator | 2025-01-16 15:08:15.214594 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-01-16 15:08:15.214599 | orchestrator | Thursday 16 January 2025 15:06:02 +0000 (0:00:00.404) 0:07:21.393 ****** 2025-01-16 15:08:15.214604 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.214609 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.214614 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.214618 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.214623 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.214633 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.214638 | orchestrator | 2025-01-16 15:08:15.214643 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-01-16 15:08:15.214648 | orchestrator | Thursday 16 January 2025 15:06:03 +0000 (0:00:00.533) 0:07:21.927 ****** 2025-01-16 15:08:15.214653 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.214657 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.214662 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.214667 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.214672 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.214679 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.214684 | orchestrator | 2025-01-16 15:08:15.214689 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-01-16 15:08:15.214694 | orchestrator | Thursday 16 January 2025 15:06:03 +0000 (0:00:00.418) 0:07:22.345 ****** 2025-01-16 15:08:15.214699 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.214703 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.214708 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.214713 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.214718 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.214723 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.214728 | orchestrator | 2025-01-16 15:08:15.214733 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-01-16 15:08:15.214737 | orchestrator | Thursday 16 January 2025 15:06:04 +0000 (0:00:00.534) 0:07:22.880 ****** 2025-01-16 15:08:15.214742 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-01-16 15:08:15.214747 | orchestrator | skipping: [testbed-node-3] => (item=)  
2025-01-16 15:08:15.214752 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.214757 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-01-16 15:08:15.214762 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-01-16 15:08:15.214767 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.214771 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-01-16 15:08:15.214776 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-01-16 15:08:15.214781 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.214786 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-01-16 15:08:15.214791 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-01-16 15:08:15.214796 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.214801 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-01-16 15:08:15.214807 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-01-16 15:08:15.214814 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.214823 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-01-16 15:08:15.214831 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-01-16 15:08:15.214838 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.214845 | orchestrator | 2025-01-16 15:08:15.214853 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-01-16 15:08:15.214860 | orchestrator | Thursday 16 January 2025 15:06:04 +0000 (0:00:00.462) 0:07:23.343 ****** 2025-01-16 15:08:15.214867 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-01-16 15:08:15.214875 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-01-16 15:08:15.214883 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.214890 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-01-16 15:08:15.214895 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-01-16 15:08:15.214899 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.214904 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-01-16 15:08:15.214909 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-01-16 15:08:15.214914 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.214918 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-01-16 15:08:15.214927 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-01-16 15:08:15.214932 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.214937 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-01-16 15:08:15.214942 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-01-16 15:08:15.214947 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.214967 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-01-16 15:08:15.214973 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-01-16 15:08:15.214978 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.214983 | orchestrator | 2025-01-16 15:08:15.214988 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-01-16 15:08:15.214993 | orchestrator | Thursday 16 January 2025 15:06:05 +0000 (0:00:00.571) 0:07:23.914 ****** 2025-01-16 15:08:15.214998 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.215003 | 
orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.215008 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.215012 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.215017 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.215022 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.215027 | orchestrator | 2025-01-16 15:08:15.215035 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-01-16 15:08:15.215040 | orchestrator | Thursday 16 January 2025 15:06:05 +0000 (0:00:00.430) 0:07:24.345 ****** 2025-01-16 15:08:15.215045 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.215050 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.215055 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.215060 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.215064 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.215069 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.215074 | orchestrator | 2025-01-16 15:08:15.215079 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-01-16 15:08:15.215084 | orchestrator | Thursday 16 January 2025 15:06:06 +0000 (0:00:00.551) 0:07:24.897 ****** 2025-01-16 15:08:15.215089 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.215094 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.215099 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.215104 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.215109 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.215114 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.215118 | orchestrator | 2025-01-16 15:08:15.215123 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-01-16 15:08:15.215128 | orchestrator | Thursday 16 January 2025 15:06:06 +0000 (0:00:00.415) 0:07:25.312 ****** 2025-01-16 15:08:15.215133 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.215138 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.215143 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.215147 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.215152 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.215161 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.215166 | orchestrator | 2025-01-16 15:08:15.215171 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-01-16 15:08:15.215176 | orchestrator | Thursday 16 January 2025 15:06:07 +0000 (0:00:00.535) 0:07:25.847 ****** 2025-01-16 15:08:15.215181 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.215186 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.215191 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.215196 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.215201 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.215205 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.215210 | orchestrator | 2025-01-16 15:08:15.215218 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-01-16 15:08:15.215223 | orchestrator | Thursday 16 January 2025 15:06:07 +0000 (0:00:00.428) 0:07:26.276 ****** 2025-01-16 15:08:15.215228 | 
orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.215233 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.215241 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.215246 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.215250 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.215255 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.215260 | orchestrator | 2025-01-16 15:08:15.215265 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-01-16 15:08:15.215270 | orchestrator | Thursday 16 January 2025 15:06:08 +0000 (0:00:00.525) 0:07:26.801 ****** 2025-01-16 15:08:15.215275 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-01-16 15:08:15.215280 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-01-16 15:08:15.215285 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-01-16 15:08:15.215290 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.215294 | orchestrator | 2025-01-16 15:08:15.215299 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-01-16 15:08:15.215304 | orchestrator | Thursday 16 January 2025 15:06:08 +0000 (0:00:00.291) 0:07:27.093 ****** 2025-01-16 15:08:15.215309 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-01-16 15:08:15.215314 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-01-16 15:08:15.215319 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-01-16 15:08:15.215323 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.215328 | orchestrator | 2025-01-16 15:08:15.215333 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-01-16 15:08:15.215338 | orchestrator | Thursday 16 January 2025 15:06:08 +0000 (0:00:00.275) 0:07:27.369 ****** 2025-01-16 15:08:15.215343 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-01-16 15:08:15.215348 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-01-16 15:08:15.215353 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-01-16 15:08:15.215358 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.215363 | orchestrator | 2025-01-16 15:08:15.215368 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-01-16 15:08:15.215372 | orchestrator | Thursday 16 January 2025 15:06:09 +0000 (0:00:00.299) 0:07:27.668 ****** 2025-01-16 15:08:15.215377 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.215382 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.215387 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.215392 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.215409 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.215415 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.215420 | orchestrator | 2025-01-16 15:08:15.215425 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-01-16 15:08:15.215430 | orchestrator | Thursday 16 January 2025 15:06:09 +0000 (0:00:00.405) 0:07:28.074 ****** 2025-01-16 15:08:15.215435 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-01-16 15:08:15.215439 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.215444 | orchestrator | skipping: 
[testbed-node-4] => (item=0)  2025-01-16 15:08:15.215449 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.215454 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-01-16 15:08:15.215459 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.215464 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-01-16 15:08:15.215469 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.215474 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-01-16 15:08:15.215479 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.215484 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-01-16 15:08:15.215492 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.215497 | orchestrator | 2025-01-16 15:08:15.215502 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-01-16 15:08:15.215506 | orchestrator | Thursday 16 January 2025 15:06:10 +0000 (0:00:00.725) 0:07:28.800 ****** 2025-01-16 15:08:15.215511 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.215516 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.215521 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.215526 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.215531 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.215536 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.215540 | orchestrator | 2025-01-16 15:08:15.215545 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-01-16 15:08:15.215550 | orchestrator | Thursday 16 January 2025 15:06:10 +0000 (0:00:00.472) 0:07:29.273 ****** 2025-01-16 15:08:15.215572 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.215577 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.215582 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.215587 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.215591 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.215596 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.215601 | orchestrator | 2025-01-16 15:08:15.215606 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-01-16 15:08:15.215611 | orchestrator | Thursday 16 January 2025 15:06:11 +0000 (0:00:00.591) 0:07:29.865 ****** 2025-01-16 15:08:15.215616 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-01-16 15:08:15.215621 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.215626 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-01-16 15:08:15.215631 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.215635 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-01-16 15:08:15.215640 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.215645 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-01-16 15:08:15.215650 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.215655 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-01-16 15:08:15.215660 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.215665 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-01-16 15:08:15.215669 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.215674 | orchestrator | 2025-01-16 15:08:15.215679 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-01-16 
15:08:15.215684 | orchestrator | Thursday 16 January 2025 15:06:12 +0000 (0:00:00.810) 0:07:30.675 ****** 2025-01-16 15:08:15.215689 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-01-16 15:08:15.215694 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.215699 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-01-16 15:08:15.215704 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.215709 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-01-16 15:08:15.215714 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.215719 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.215724 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.215729 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.215734 | orchestrator | 2025-01-16 15:08:15.215739 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-01-16 15:08:15.215743 | orchestrator | Thursday 16 January 2025 15:06:12 +0000 (0:00:00.653) 0:07:31.329 ****** 2025-01-16 15:08:15.215748 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-01-16 15:08:15.215753 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-01-16 15:08:15.215761 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-01-16 15:08:15.215766 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-01-16 15:08:15.215771 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-01-16 15:08:15.215776 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-01-16 15:08:15.215780 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.215785 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-01-16 15:08:15.215793 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-01-16 15:08:15.215798 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.215803 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-01-16 15:08:15.215808 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-01-16 15:08:15.215813 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-01-16 15:08:15.215817 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-01-16 15:08:15.215822 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.215830 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-01-16 15:08:15.215835 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-01-16 15:08:15.215840 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-01-16 15:08:15.215845 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.215850 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.215854 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-01-16 15:08:15.215859 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-01-16 15:08:15.215864 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-01-16 15:08:15.215869 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.215874 | 
orchestrator | 2025-01-16 15:08:15.215879 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-01-16 15:08:15.215884 | orchestrator | Thursday 16 January 2025 15:06:13 +0000 (0:00:01.108) 0:07:32.437 ****** 2025-01-16 15:08:15.215889 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.215894 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.215901 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.215906 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.215911 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.215916 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.215921 | orchestrator | 2025-01-16 15:08:15.215925 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-01-16 15:08:15.215933 | orchestrator | Thursday 16 January 2025 15:06:14 +0000 (0:00:00.987) 0:07:33.424 ****** 2025-01-16 15:08:15.215938 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-01-16 15:08:15.215943 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.215947 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-01-16 15:08:15.215952 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.215957 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-01-16 15:08:15.215962 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.215967 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.215972 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.215977 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.215982 | orchestrator | 2025-01-16 15:08:15.215987 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-01-16 15:08:15.215992 | orchestrator | Thursday 16 January 2025 15:06:15 +0000 (0:00:00.948) 0:07:34.373 ****** 2025-01-16 15:08:15.215997 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.216002 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.216006 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.216011 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.216016 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.216021 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.216029 | orchestrator | 2025-01-16 15:08:15.216034 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-01-16 15:08:15.216039 | orchestrator | Thursday 16 January 2025 15:06:16 +0000 (0:00:00.813) 0:07:35.187 ****** 2025-01-16 15:08:15.216044 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.216049 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.216054 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.216059 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:08:15.216064 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:08:15.216068 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:08:15.216073 | orchestrator | 2025-01-16 15:08:15.216078 | orchestrator | TASK [ceph-crash : create client.crash keyring] ******************************** 2025-01-16 15:08:15.216083 | orchestrator | Thursday 16 January 2025 15:06:17 +0000 (0:00:00.800) 0:07:35.987 ****** 2025-01-16 15:08:15.216088 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-01-16 15:08:15.216093 | orchestrator | 2025-01-16 15:08:15.216098 | orchestrator 
| TASK [ceph-crash : get keys from monitors] ************************************* 2025-01-16 15:08:15.216103 | orchestrator | Thursday 16 January 2025 15:06:19 +0000 (0:00:02.389) 0:07:38.377 ****** 2025-01-16 15:08:15.216108 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-01-16 15:08:15.216113 | orchestrator | 2025-01-16 15:08:15.216118 | orchestrator | TASK [ceph-crash : copy ceph key(s) if needed] ********************************* 2025-01-16 15:08:15.216123 | orchestrator | Thursday 16 January 2025 15:06:21 +0000 (0:00:01.167) 0:07:39.544 ****** 2025-01-16 15:08:15.216127 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:08:15.216132 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:08:15.216137 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:08:15.216142 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.216147 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:08:15.216152 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:08:15.216157 | orchestrator | 2025-01-16 15:08:15.216162 | orchestrator | TASK [ceph-crash : create /var/lib/ceph/crash/posted] ************************** 2025-01-16 15:08:15.216167 | orchestrator | Thursday 16 January 2025 15:06:22 +0000 (0:00:01.110) 0:07:40.655 ****** 2025-01-16 15:08:15.216172 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:08:15.216177 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:08:15.216181 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:08:15.216186 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:08:15.216191 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:08:15.216196 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:08:15.216201 | orchestrator | 2025-01-16 15:08:15.216206 | orchestrator | TASK [ceph-crash : include_tasks systemd.yml] ********************************** 2025-01-16 15:08:15.216211 | orchestrator | Thursday 16 January 2025 15:06:22 +0000 (0:00:00.661) 0:07:41.317 ****** 2025-01-16 15:08:15.216216 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:08:15.216221 | orchestrator | 2025-01-16 15:08:15.216226 | orchestrator | TASK [ceph-crash : generate systemd unit file for ceph-crash container] ******** 2025-01-16 15:08:15.216231 | orchestrator | Thursday 16 January 2025 15:06:23 +0000 (0:00:00.839) 0:07:42.156 ****** 2025-01-16 15:08:15.216236 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:08:15.216241 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:08:15.216246 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:08:15.216251 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:08:15.216261 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:08:15.216267 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:08:15.216271 | orchestrator | 2025-01-16 15:08:15.216276 | orchestrator | TASK [ceph-crash : start the ceph-crash service] ******************************* 2025-01-16 15:08:15.216281 | orchestrator | Thursday 16 January 2025 15:06:24 +0000 (0:00:01.067) 0:07:43.223 ****** 2025-01-16 15:08:15.216286 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:08:15.216291 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:08:15.216300 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:08:15.216305 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:08:15.216309 | orchestrator | changed: [testbed-node-1] 2025-01-16 
15:08:15.216314 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:08:15.216319 | orchestrator | 2025-01-16 15:08:15.216324 | orchestrator | RUNNING HANDLER [ceph-handler : ceph crash handler] **************************** 2025-01-16 15:08:15.216329 | orchestrator | Thursday 16 January 2025 15:06:27 +0000 (0:00:02.695) 0:07:45.919 ****** 2025-01-16 15:08:15.216334 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:08:15.216339 | orchestrator | 2025-01-16 15:08:15.216344 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called before restart] ****** 2025-01-16 15:08:15.216349 | orchestrator | Thursday 16 January 2025 15:06:28 +0000 (0:00:00.849) 0:07:46.768 ****** 2025-01-16 15:08:15.216354 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.216359 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.216364 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.216369 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.216374 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.216379 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.216386 | orchestrator | 2025-01-16 15:08:15.216391 | orchestrator | RUNNING HANDLER [ceph-handler : restart the ceph-crash service] **************** 2025-01-16 15:08:15.216396 | orchestrator | Thursday 16 January 2025 15:06:28 +0000 (0:00:00.540) 0:07:47.308 ****** 2025-01-16 15:08:15.216401 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:08:15.216406 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:08:15.216411 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:08:15.216416 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:08:15.216421 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:08:15.216426 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:08:15.216431 | orchestrator | 2025-01-16 15:08:15.216436 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called after restart] ******* 2025-01-16 15:08:15.216441 | orchestrator | Thursday 16 January 2025 15:06:30 +0000 (0:00:01.718) 0:07:49.027 ****** 2025-01-16 15:08:15.216446 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.216451 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.216456 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.216460 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:08:15.216465 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:08:15.216470 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:08:15.216475 | orchestrator | 2025-01-16 15:08:15.216480 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-01-16 15:08:15.216485 | orchestrator | 2025-01-16 15:08:15.216490 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-01-16 15:08:15.216495 | orchestrator | Thursday 16 January 2025 15:06:32 +0000 (0:00:01.711) 0:07:50.738 ****** 2025-01-16 15:08:15.216500 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 15:08:15.216505 | orchestrator | 2025-01-16 15:08:15.216509 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-01-16 15:08:15.216514 | orchestrator | Thursday 16 January 2025 15:06:32 +0000 (0:00:00.492) 0:07:51.231 ****** 2025-01-16 15:08:15.216519 | 
orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.216524 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.216529 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.216534 | orchestrator | 2025-01-16 15:08:15.216543 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-01-16 15:08:15.216548 | orchestrator | Thursday 16 January 2025 15:06:32 +0000 (0:00:00.203) 0:07:51.434 ****** 2025-01-16 15:08:15.216616 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.216622 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.216628 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.216633 | orchestrator | 2025-01-16 15:08:15.216642 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-01-16 15:08:15.216647 | orchestrator | Thursday 16 January 2025 15:06:33 +0000 (0:00:00.462) 0:07:51.896 ****** 2025-01-16 15:08:15.216652 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.216657 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.216661 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.216666 | orchestrator | 2025-01-16 15:08:15.216671 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-01-16 15:08:15.216676 | orchestrator | Thursday 16 January 2025 15:06:33 +0000 (0:00:00.617) 0:07:52.513 ****** 2025-01-16 15:08:15.216681 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.216686 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.216691 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.216696 | orchestrator | 2025-01-16 15:08:15.216700 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-01-16 15:08:15.216705 | orchestrator | Thursday 16 January 2025 15:06:34 +0000 (0:00:00.455) 0:07:52.969 ****** 2025-01-16 15:08:15.216710 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.216715 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.216720 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.216725 | orchestrator | 2025-01-16 15:08:15.216730 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-01-16 15:08:15.216735 | orchestrator | Thursday 16 January 2025 15:06:34 +0000 (0:00:00.207) 0:07:53.177 ****** 2025-01-16 15:08:15.216740 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.216744 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.216749 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.216754 | orchestrator | 2025-01-16 15:08:15.216759 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-01-16 15:08:15.216764 | orchestrator | Thursday 16 January 2025 15:06:34 +0000 (0:00:00.198) 0:07:53.376 ****** 2025-01-16 15:08:15.216769 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.216774 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.216782 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.216787 | orchestrator | 2025-01-16 15:08:15.216792 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-01-16 15:08:15.216797 | orchestrator | Thursday 16 January 2025 15:06:35 +0000 (0:00:00.355) 0:07:53.731 ****** 2025-01-16 15:08:15.216802 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.216807 | orchestrator | skipping: [testbed-node-4] 
2025-01-16 15:08:15.216811 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.216816 | orchestrator | 2025-01-16 15:08:15.216821 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-01-16 15:08:15.216826 | orchestrator | Thursday 16 January 2025 15:06:35 +0000 (0:00:00.208) 0:07:53.939 ****** 2025-01-16 15:08:15.216831 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.216836 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.216841 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.216846 | orchestrator | 2025-01-16 15:08:15.216850 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-01-16 15:08:15.216855 | orchestrator | Thursday 16 January 2025 15:06:35 +0000 (0:00:00.207) 0:07:54.147 ****** 2025-01-16 15:08:15.216860 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.216865 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.216870 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.216875 | orchestrator | 2025-01-16 15:08:15.216880 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-01-16 15:08:15.216885 | orchestrator | Thursday 16 January 2025 15:06:35 +0000 (0:00:00.198) 0:07:54.345 ****** 2025-01-16 15:08:15.216890 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.216895 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.216900 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.216905 | orchestrator | 2025-01-16 15:08:15.216910 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-01-16 15:08:15.216918 | orchestrator | Thursday 16 January 2025 15:06:36 +0000 (0:00:00.637) 0:07:54.983 ****** 2025-01-16 15:08:15.216923 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.216928 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.216933 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.216938 | orchestrator | 2025-01-16 15:08:15.216945 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-01-16 15:08:15.216953 | orchestrator | Thursday 16 January 2025 15:06:36 +0000 (0:00:00.203) 0:07:55.187 ****** 2025-01-16 15:08:15.216960 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.216968 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.216978 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.216985 | orchestrator | 2025-01-16 15:08:15.216992 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-01-16 15:08:15.217000 | orchestrator | Thursday 16 January 2025 15:06:36 +0000 (0:00:00.215) 0:07:55.403 ****** 2025-01-16 15:08:15.217007 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.217016 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.217025 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.217033 | orchestrator | 2025-01-16 15:08:15.217041 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-01-16 15:08:15.217049 | orchestrator | Thursday 16 January 2025 15:06:37 +0000 (0:00:00.279) 0:07:55.682 ****** 2025-01-16 15:08:15.217057 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.217064 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.217072 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.217080 | orchestrator | 
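
The handler_mds_status / handler_osd_status facts logged above are derived from the container checks run earlier in the same play. As a rough, illustrative sketch only — not the actual ceph-ansible source; the filter pattern, register name, and fact expression are assumptions — such a check-and-fact pair could look like this:

    # Illustrative sketch -- assumes container_binary (docker or podman) is defined;
    # task names mirror the log, but the arguments here are examples, not ceph-ansible's.
    - name: check for a mds container
      ansible.builtin.command: "{{ container_binary }} ps -q --filter name=ceph-mds-{{ ansible_facts['hostname'] }}"
      register: ceph_mds_container_stat
      changed_when: false
      failed_when: false

    - name: set_fact handler_mds_status
      ansible.builtin.set_fact:
        handler_mds_status: "{{ (ceph_mds_container_stat.stdout_lines | default([])) | length > 0 }}"

In the run above, each status fact is only evaluated on the hosts belonging to the matching group, which is why the remaining nodes report "skipping" rather than "ok".
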
2025-01-16 15:08:15.217088 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-01-16 15:08:15.217096 | orchestrator | Thursday 16 January 2025 15:06:37 +0000 (0:00:00.429) 0:07:56.112 ****** 2025-01-16 15:08:15.217103 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.217111 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.217119 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.217127 | orchestrator | 2025-01-16 15:08:15.217135 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-01-16 15:08:15.217146 | orchestrator | Thursday 16 January 2025 15:06:37 +0000 (0:00:00.246) 0:07:56.358 ****** 2025-01-16 15:08:15.217154 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.217163 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.217170 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.217177 | orchestrator | 2025-01-16 15:08:15.217185 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-01-16 15:08:15.217192 | orchestrator | Thursday 16 January 2025 15:06:38 +0000 (0:00:00.227) 0:07:56.586 ****** 2025-01-16 15:08:15.217201 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.217209 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.217216 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.217224 | orchestrator | 2025-01-16 15:08:15.217231 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-01-16 15:08:15.217239 | orchestrator | Thursday 16 January 2025 15:06:38 +0000 (0:00:00.221) 0:07:56.808 ****** 2025-01-16 15:08:15.217247 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.217252 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.217257 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.217262 | orchestrator | 2025-01-16 15:08:15.217267 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-01-16 15:08:15.217272 | orchestrator | Thursday 16 January 2025 15:06:38 +0000 (0:00:00.397) 0:07:57.205 ****** 2025-01-16 15:08:15.217276 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.217281 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.217286 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.217291 | orchestrator | 2025-01-16 15:08:15.217296 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-01-16 15:08:15.217301 | orchestrator | Thursday 16 January 2025 15:06:38 +0000 (0:00:00.254) 0:07:57.459 ****** 2025-01-16 15:08:15.217310 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.217315 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.217320 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.217325 | orchestrator | 2025-01-16 15:08:15.217330 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-01-16 15:08:15.217335 | orchestrator | Thursday 16 January 2025 15:06:39 +0000 (0:00:00.251) 0:07:57.710 ****** 2025-01-16 15:08:15.217340 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.217345 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.217355 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.217360 | orchestrator | 2025-01-16 15:08:15.217365 | orchestrator | TASK [ceph-config : reset num_osds] 
******************************************** 2025-01-16 15:08:15.217370 | orchestrator | Thursday 16 January 2025 15:06:39 +0000 (0:00:00.252) 0:07:57.962 ****** 2025-01-16 15:08:15.217374 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.217379 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.217384 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.217392 | orchestrator | 2025-01-16 15:08:15.217397 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-01-16 15:08:15.217402 | orchestrator | Thursday 16 January 2025 15:06:39 +0000 (0:00:00.439) 0:07:58.402 ****** 2025-01-16 15:08:15.217407 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.217412 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.217416 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.217421 | orchestrator | 2025-01-16 15:08:15.217426 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-01-16 15:08:15.217431 | orchestrator | Thursday 16 January 2025 15:06:40 +0000 (0:00:00.240) 0:07:58.642 ****** 2025-01-16 15:08:15.217436 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.217441 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.217445 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.217450 | orchestrator | 2025-01-16 15:08:15.217455 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-01-16 15:08:15.217460 | orchestrator | Thursday 16 January 2025 15:06:40 +0000 (0:00:00.232) 0:07:58.875 ****** 2025-01-16 15:08:15.217465 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.217470 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.217474 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.217479 | orchestrator | 2025-01-16 15:08:15.217484 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-01-16 15:08:15.217489 | orchestrator | Thursday 16 January 2025 15:06:40 +0000 (0:00:00.220) 0:07:59.095 ****** 2025-01-16 15:08:15.217494 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.217499 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.217504 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.217508 | orchestrator | 2025-01-16 15:08:15.217513 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-01-16 15:08:15.217519 | orchestrator | Thursday 16 January 2025 15:06:41 +0000 (0:00:00.449) 0:07:59.545 ****** 2025-01-16 15:08:15.217524 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.217528 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.217533 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.217538 | orchestrator | 2025-01-16 15:08:15.217543 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-01-16 15:08:15.217548 | orchestrator | Thursday 16 January 2025 15:06:41 +0000 (0:00:00.247) 0:07:59.792 ****** 2025-01-16 15:08:15.217567 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.217573 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.217578 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.217583 | orchestrator | 2025-01-16 15:08:15.217588 | orchestrator | TASK [ceph-config : set_fact num_osds from the 
output of 'ceph-volume lvm batch --report' (new report)] *** 2025-01-16 15:08:15.217593 | orchestrator | Thursday 16 January 2025 15:06:41 +0000 (0:00:00.249) 0:08:00.042 ****** 2025-01-16 15:08:15.217605 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.217610 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.217617 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.217622 | orchestrator | 2025-01-16 15:08:15.217627 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-01-16 15:08:15.217632 | orchestrator | Thursday 16 January 2025 15:06:41 +0000 (0:00:00.234) 0:08:00.277 ****** 2025-01-16 15:08:15.217637 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.217642 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.217647 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.217652 | orchestrator | 2025-01-16 15:08:15.217657 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-01-16 15:08:15.217661 | orchestrator | Thursday 16 January 2025 15:06:42 +0000 (0:00:00.409) 0:08:00.686 ****** 2025-01-16 15:08:15.217666 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.217671 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.217676 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.217681 | orchestrator | 2025-01-16 15:08:15.217686 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-01-16 15:08:15.217691 | orchestrator | Thursday 16 January 2025 15:06:42 +0000 (0:00:00.221) 0:08:00.908 ****** 2025-01-16 15:08:15.217696 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-01-16 15:08:15.217701 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-01-16 15:08:15.217706 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-01-16 15:08:15.217711 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-01-16 15:08:15.217715 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.217720 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.217725 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-01-16 15:08:15.217730 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-01-16 15:08:15.217735 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.217740 | orchestrator | 2025-01-16 15:08:15.217745 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-01-16 15:08:15.217749 | orchestrator | Thursday 16 January 2025 15:06:42 +0000 (0:00:00.266) 0:08:01.175 ****** 2025-01-16 15:08:15.217754 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-01-16 15:08:15.217759 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-01-16 15:08:15.217764 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-01-16 15:08:15.217769 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-01-16 15:08:15.217774 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.217779 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.217784 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-01-16 15:08:15.217789 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-01-16 15:08:15.217796 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.217802 | orchestrator | 
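
The num_osds bookkeeping tasks skipped above normally shell out to ceph-volume on the OSD hosts. A minimal sketch of that pattern, assuming a per-host devices list and a JSON-formatted batch report (the variable and fact names are illustrative, not the ceph-ansible originals):

    # Illustrative sketch -- assumes `devices` holds this host's data disks and that
    # the batch report is requested as JSON (hedged assumption, not verified here).
    - name: run 'ceph-volume lvm batch --report' to count the osds to be created
      ansible.builtin.command: >
        ceph-volume lvm batch --report --format json {{ devices | join(' ') }}
      register: lvm_batch_report
      changed_when: false

    - name: set_fact num_osds from the report
      ansible.builtin.set_fact:
        num_osds: "{{ lvm_batch_report.stdout | from_json | length }}"

Registering the command with changed_when: false keeps the report read-only from Ansible's point of view, which matches how these tasks appear as "ok" or "skipping" rather than "changed" in the log.
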
2025-01-16 15:08:15.217807 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-01-16 15:08:15.217814 | orchestrator | Thursday 16 January 2025 15:06:42 +0000 (0:00:00.290) 0:08:01.466 ****** 2025-01-16 15:08:15.217819 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.217824 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.217829 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.217833 | orchestrator | 2025-01-16 15:08:15.217838 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-01-16 15:08:15.217844 | orchestrator | Thursday 16 January 2025 15:06:43 +0000 (0:00:00.421) 0:08:01.887 ****** 2025-01-16 15:08:15.217848 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.217853 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.217858 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.217863 | orchestrator | 2025-01-16 15:08:15.217868 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-01-16 15:08:15.217877 | orchestrator | Thursday 16 January 2025 15:06:43 +0000 (0:00:00.240) 0:08:02.128 ****** 2025-01-16 15:08:15.217882 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.217887 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.217892 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.217897 | orchestrator | 2025-01-16 15:08:15.217901 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-01-16 15:08:15.217906 | orchestrator | Thursday 16 January 2025 15:06:43 +0000 (0:00:00.233) 0:08:02.362 ****** 2025-01-16 15:08:15.217911 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.217916 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.217921 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.217926 | orchestrator | 2025-01-16 15:08:15.217931 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-01-16 15:08:15.217935 | orchestrator | Thursday 16 January 2025 15:06:44 +0000 (0:00:00.241) 0:08:02.603 ****** 2025-01-16 15:08:15.217940 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.217945 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.217950 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.217955 | orchestrator | 2025-01-16 15:08:15.217959 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-01-16 15:08:15.217964 | orchestrator | Thursday 16 January 2025 15:06:44 +0000 (0:00:00.404) 0:08:03.007 ****** 2025-01-16 15:08:15.217969 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.217974 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.217980 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.217987 | orchestrator | 2025-01-16 15:08:15.217995 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-01-16 15:08:15.218003 | orchestrator | Thursday 16 January 2025 15:06:44 +0000 (0:00:00.232) 0:08:03.240 ****** 2025-01-16 15:08:15.218010 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-01-16 15:08:15.218171 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-01-16 15:08:15.218179 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-5)  2025-01-16 15:08:15.218184 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.218189 | orchestrator | 2025-01-16 15:08:15.218194 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-01-16 15:08:15.218198 | orchestrator | Thursday 16 January 2025 15:06:44 +0000 (0:00:00.287) 0:08:03.527 ****** 2025-01-16 15:08:15.218203 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-01-16 15:08:15.218208 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-01-16 15:08:15.218213 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-01-16 15:08:15.218218 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.218223 | orchestrator | 2025-01-16 15:08:15.218228 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-01-16 15:08:15.218233 | orchestrator | Thursday 16 January 2025 15:06:45 +0000 (0:00:00.287) 0:08:03.814 ****** 2025-01-16 15:08:15.218237 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-01-16 15:08:15.218242 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-01-16 15:08:15.218247 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-01-16 15:08:15.218252 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.218257 | orchestrator | 2025-01-16 15:08:15.218262 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-01-16 15:08:15.218266 | orchestrator | Thursday 16 January 2025 15:06:45 +0000 (0:00:00.282) 0:08:04.097 ****** 2025-01-16 15:08:15.218271 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.218276 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.218281 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.218286 | orchestrator | 2025-01-16 15:08:15.218291 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-01-16 15:08:15.218300 | orchestrator | Thursday 16 January 2025 15:06:45 +0000 (0:00:00.208) 0:08:04.305 ****** 2025-01-16 15:08:15.218305 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-01-16 15:08:15.218310 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.218315 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-01-16 15:08:15.218320 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.218325 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-01-16 15:08:15.218329 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.218334 | orchestrator | 2025-01-16 15:08:15.218339 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-01-16 15:08:15.218344 | orchestrator | Thursday 16 January 2025 15:06:46 +0000 (0:00:00.485) 0:08:04.791 ****** 2025-01-16 15:08:15.218349 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.218354 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.218359 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.218363 | orchestrator | 2025-01-16 15:08:15.218368 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-01-16 15:08:15.218373 | orchestrator | Thursday 16 January 2025 15:06:46 +0000 (0:00:00.208) 0:08:04.999 ****** 2025-01-16 15:08:15.218383 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.218388 | 
orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.218393 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.218398 | orchestrator | 2025-01-16 15:08:15.218414 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-01-16 15:08:15.218419 | orchestrator | Thursday 16 January 2025 15:06:46 +0000 (0:00:00.201) 0:08:05.201 ****** 2025-01-16 15:08:15.218424 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-01-16 15:08:15.218429 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.218434 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-01-16 15:08:15.218439 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.218444 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-01-16 15:08:15.218448 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.218453 | orchestrator | 2025-01-16 15:08:15.218458 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-01-16 15:08:15.218463 | orchestrator | Thursday 16 January 2025 15:06:46 +0000 (0:00:00.276) 0:08:05.478 ****** 2025-01-16 15:08:15.218468 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-01-16 15:08:15.218473 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.218478 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-01-16 15:08:15.218483 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.218492 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-01-16 15:08:15.218497 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.218501 | orchestrator | 2025-01-16 15:08:15.218506 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-01-16 15:08:15.218511 | orchestrator | Thursday 16 January 2025 15:06:47 +0000 (0:00:00.362) 0:08:05.840 ****** 2025-01-16 15:08:15.218516 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-01-16 15:08:15.218521 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-01-16 15:08:15.218526 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-01-16 15:08:15.218530 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.218535 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-01-16 15:08:15.218540 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-01-16 15:08:15.218545 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-01-16 15:08:15.218550 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.218573 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-01-16 15:08:15.218578 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-01-16 15:08:15.218586 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-01-16 15:08:15.218591 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.218596 | orchestrator | 2025-01-16 15:08:15.218600 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-01-16 15:08:15.218605 | orchestrator | Thursday 16 January 2025 15:06:47 +0000 (0:00:00.392) 0:08:06.233 ****** 
2025-01-16 15:08:15.218610 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.218615 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.218620 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.218624 | orchestrator | 2025-01-16 15:08:15.218634 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-01-16 15:08:15.218639 | orchestrator | Thursday 16 January 2025 15:06:48 +0000 (0:00:00.466) 0:08:06.699 ****** 2025-01-16 15:08:15.218644 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-01-16 15:08:15.218649 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.218654 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-01-16 15:08:15.218658 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.218663 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-01-16 15:08:15.218668 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.218673 | orchestrator | 2025-01-16 15:08:15.218678 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-01-16 15:08:15.218683 | orchestrator | Thursday 16 January 2025 15:06:48 +0000 (0:00:00.367) 0:08:07.066 ****** 2025-01-16 15:08:15.218688 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.218692 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.218697 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.218702 | orchestrator | 2025-01-16 15:08:15.218707 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-01-16 15:08:15.218712 | orchestrator | Thursday 16 January 2025 15:06:49 +0000 (0:00:00.496) 0:08:07.563 ****** 2025-01-16 15:08:15.218716 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.218721 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.218726 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.218731 | orchestrator | 2025-01-16 15:08:15.218736 | orchestrator | TASK [ceph-mds : include create_mds_filesystems.yml] *************************** 2025-01-16 15:08:15.218740 | orchestrator | Thursday 16 January 2025 15:06:49 +0000 (0:00:00.352) 0:08:07.916 ****** 2025-01-16 15:08:15.218745 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.218750 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.218755 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-01-16 15:08:15.218760 | orchestrator | 2025-01-16 15:08:15.218765 | orchestrator | TASK [ceph-facts : get current default crush rule details] ********************* 2025-01-16 15:08:15.218769 | orchestrator | Thursday 16 January 2025 15:06:49 +0000 (0:00:00.256) 0:08:08.172 ****** 2025-01-16 15:08:15.218774 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-01-16 15:08:15.218779 | orchestrator | 2025-01-16 15:08:15.218784 | orchestrator | TASK [ceph-facts : get current default crush rule name] ************************ 2025-01-16 15:08:15.218789 | orchestrator | Thursday 16 January 2025 15:06:50 +0000 (0:00:01.269) 0:08:09.442 ****** 2025-01-16 15:08:15.218799 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-01-16 15:08:15.218807 | orchestrator | skipping: [testbed-node-3] 2025-01-16 
15:08:15.218811 | orchestrator | 2025-01-16 15:08:15.218816 | orchestrator | TASK [ceph-mds : create filesystem pools] ************************************** 2025-01-16 15:08:15.218821 | orchestrator | Thursday 16 January 2025 15:06:51 +0000 (0:00:00.249) 0:08:09.691 ****** 2025-01-16 15:08:15.218831 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-01-16 15:08:15.218839 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-01-16 15:08:15.218847 | orchestrator | 2025-01-16 15:08:15.218855 | orchestrator | TASK [ceph-mds : create ceph filesystem] *************************************** 2025-01-16 15:08:15.218862 | orchestrator | Thursday 16 January 2025 15:06:55 +0000 (0:00:04.568) 0:08:14.260 ****** 2025-01-16 15:08:15.218870 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-01-16 15:08:15.218877 | orchestrator | 2025-01-16 15:08:15.218886 | orchestrator | TASK [ceph-mds : include common.yml] ******************************************* 2025-01-16 15:08:15.218892 | orchestrator | Thursday 16 January 2025 15:06:57 +0000 (0:00:01.727) 0:08:15.988 ****** 2025-01-16 15:08:15.218896 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 15:08:15.218901 | orchestrator | 2025-01-16 15:08:15.218906 | orchestrator | TASK [ceph-mds : create bootstrap-mds and mds directories] ********************* 2025-01-16 15:08:15.218911 | orchestrator | Thursday 16 January 2025 15:06:57 +0000 (0:00:00.361) 0:08:16.349 ****** 2025-01-16 15:08:15.218916 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-01-16 15:08:15.218921 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-01-16 15:08:15.218926 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-01-16 15:08:15.218931 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-01-16 15:08:15.218936 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-01-16 15:08:15.218940 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-01-16 15:08:15.218945 | orchestrator | 2025-01-16 15:08:15.218950 | orchestrator | TASK [ceph-mds : get keys from monitors] *************************************** 2025-01-16 15:08:15.218955 | orchestrator | Thursday 16 January 2025 15:06:58 +0000 (0:00:00.822) 0:08:17.171 ****** 2025-01-16 15:08:15.218960 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-01-16 15:08:15.218965 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-01-16 15:08:15.218970 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-01-16 15:08:15.218975 | orchestrator | 2025-01-16 15:08:15.218979 | orchestrator | TASK [ceph-mds : copy ceph key(s) if needed] *********************************** 2025-01-16 15:08:15.218984 | orchestrator | Thursday 16 January 2025 15:06:59 +0000 (0:00:01.095) 0:08:18.267 ****** 
2025-01-16 15:08:15.218992 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-01-16 15:08:15.218998 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-01-16 15:08:15.219002 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:08:15.219007 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-01-16 15:08:15.219012 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-01-16 15:08:15.219017 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:08:15.219022 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-01-16 15:08:15.219027 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-01-16 15:08:15.219032 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:08:15.219037 | orchestrator | 2025-01-16 15:08:15.219042 | orchestrator | TASK [ceph-mds : non_containerized.yml] **************************************** 2025-01-16 15:08:15.219047 | orchestrator | Thursday 16 January 2025 15:07:00 +0000 (0:00:00.690) 0:08:18.957 ****** 2025-01-16 15:08:15.219051 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.219061 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.219066 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.219071 | orchestrator | 2025-01-16 15:08:15.219076 | orchestrator | TASK [ceph-mds : containerized.yml] ******************************************** 2025-01-16 15:08:15.219081 | orchestrator | Thursday 16 January 2025 15:07:00 +0000 (0:00:00.202) 0:08:19.160 ****** 2025-01-16 15:08:15.219086 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 15:08:15.219091 | orchestrator | 2025-01-16 15:08:15.219099 | orchestrator | TASK [ceph-mds : include_tasks systemd.yml] ************************************ 2025-01-16 15:08:15.219104 | orchestrator | Thursday 16 January 2025 15:07:01 +0000 (0:00:00.489) 0:08:19.649 ****** 2025-01-16 15:08:15.219109 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 15:08:15.219114 | orchestrator | 2025-01-16 15:08:15.219119 | orchestrator | TASK [ceph-mds : generate systemd unit file] *********************************** 2025-01-16 15:08:15.219124 | orchestrator | Thursday 16 January 2025 15:07:01 +0000 (0:00:00.350) 0:08:20.000 ****** 2025-01-16 15:08:15.219132 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:08:15.219137 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:08:15.219142 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:08:15.219147 | orchestrator | 2025-01-16 15:08:15.219152 | orchestrator | TASK [ceph-mds : generate systemd ceph-mds target file] ************************ 2025-01-16 15:08:15.219157 | orchestrator | Thursday 16 January 2025 15:07:02 +0000 (0:00:00.863) 0:08:20.863 ****** 2025-01-16 15:08:15.219161 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:08:15.219166 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:08:15.219171 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:08:15.219176 | orchestrator | 2025-01-16 15:08:15.219181 | orchestrator | TASK [ceph-mds : enable ceph-mds.target] *************************************** 2025-01-16 15:08:15.219185 | orchestrator | Thursday 16 January 2025 15:07:03 +0000 (0:00:00.699) 0:08:21.562 ****** 2025-01-16 15:08:15.219190 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:08:15.219195 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:08:15.219200 | 
orchestrator | changed: [testbed-node-5] 2025-01-16 15:08:15.219205 | orchestrator | 2025-01-16 15:08:15.219209 | orchestrator | TASK [ceph-mds : systemd start mds container] ********************************** 2025-01-16 15:08:15.219214 | orchestrator | Thursday 16 January 2025 15:07:04 +0000 (0:00:01.256) 0:08:22.818 ****** 2025-01-16 15:08:15.219219 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:08:15.219224 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:08:15.219228 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:08:15.219233 | orchestrator | 2025-01-16 15:08:15.219238 | orchestrator | TASK [ceph-mds : wait for mds socket to exist] ********************************* 2025-01-16 15:08:15.219243 | orchestrator | Thursday 16 January 2025 15:07:05 +0000 (0:00:01.446) 0:08:24.264 ****** 2025-01-16 15:08:15.219248 | orchestrator | FAILED - RETRYING: [testbed-node-3]: wait for mds socket to exist (5 retries left). 2025-01-16 15:08:15.219253 | orchestrator | FAILED - RETRYING: [testbed-node-4]: wait for mds socket to exist (5 retries left). 2025-01-16 15:08:15.219257 | orchestrator | FAILED - RETRYING: [testbed-node-5]: wait for mds socket to exist (5 retries left). 2025-01-16 15:08:15.219262 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.219267 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.219272 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.219277 | orchestrator | 2025-01-16 15:08:15.219282 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-01-16 15:08:15.219287 | orchestrator | Thursday 16 January 2025 15:07:22 +0000 (0:00:16.362) 0:08:40.627 ****** 2025-01-16 15:08:15.219292 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:08:15.219296 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:08:15.219301 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:08:15.219306 | orchestrator | 2025-01-16 15:08:15.219311 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-01-16 15:08:15.219320 | orchestrator | Thursday 16 January 2025 15:07:22 +0000 (0:00:00.449) 0:08:41.076 ****** 2025-01-16 15:08:15.219325 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 15:08:15.219329 | orchestrator | 2025-01-16 15:08:15.219334 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called before restart] ******** 2025-01-16 15:08:15.219339 | orchestrator | Thursday 16 January 2025 15:07:23 +0000 (0:00:00.475) 0:08:41.552 ****** 2025-01-16 15:08:15.219344 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.219349 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.219354 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.219358 | orchestrator | 2025-01-16 15:08:15.219363 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] *********************** 2025-01-16 15:08:15.219368 | orchestrator | Thursday 16 January 2025 15:07:23 +0000 (0:00:00.210) 0:08:41.763 ****** 2025-01-16 15:08:15.219373 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:08:15.219378 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:08:15.219383 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:08:15.219388 | orchestrator | 2025-01-16 15:08:15.219392 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mds daemon(s)] ******************** 2025-01-16 15:08:15.219397 | orchestrator | Thursday 16 January 2025 
15:07:23 +0000 (0:00:00.725) 0:08:42.488 ****** 2025-01-16 15:08:15.219402 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-01-16 15:08:15.219407 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-01-16 15:08:15.219412 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-01-16 15:08:15.219416 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.219421 | orchestrator | 2025-01-16 15:08:15.219426 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] ********* 2025-01-16 15:08:15.219431 | orchestrator | Thursday 16 January 2025 15:07:24 +0000 (0:00:00.580) 0:08:43.069 ****** 2025-01-16 15:08:15.219436 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.219441 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.219449 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.219454 | orchestrator | 2025-01-16 15:08:15.219459 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-01-16 15:08:15.219464 | orchestrator | Thursday 16 January 2025 15:07:24 +0000 (0:00:00.369) 0:08:43.438 ****** 2025-01-16 15:08:15.219469 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:08:15.219474 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:08:15.219479 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:08:15.219484 | orchestrator | 2025-01-16 15:08:15.219488 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-01-16 15:08:15.219493 | orchestrator | 2025-01-16 15:08:15.219498 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-01-16 15:08:15.219505 | orchestrator | Thursday 16 January 2025 15:07:26 +0000 (0:00:01.390) 0:08:44.829 ****** 2025-01-16 15:08:15.219510 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 15:08:15.219515 | orchestrator | 2025-01-16 15:08:15.219520 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-01-16 15:08:15.219525 | orchestrator | Thursday 16 January 2025 15:07:26 +0000 (0:00:00.483) 0:08:45.312 ****** 2025-01-16 15:08:15.219530 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.219538 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.219543 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.219548 | orchestrator | 2025-01-16 15:08:15.219584 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-01-16 15:08:15.219590 | orchestrator | Thursday 16 January 2025 15:07:26 +0000 (0:00:00.205) 0:08:45.518 ****** 2025-01-16 15:08:15.219596 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.219601 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.219606 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.219610 | orchestrator | 2025-01-16 15:08:15.219619 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-01-16 15:08:15.219624 | orchestrator | Thursday 16 January 2025 15:07:27 +0000 (0:00:00.458) 0:08:45.977 ****** 2025-01-16 15:08:15.219629 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.219634 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.219639 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.219644 | orchestrator | 2025-01-16 15:08:15.219649 | 
orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-01-16 15:08:15.219654 | orchestrator | Thursday 16 January 2025 15:07:27 +0000 (0:00:00.452) 0:08:46.429 ****** 2025-01-16 15:08:15.219658 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.219663 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.219668 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.219673 | orchestrator | 2025-01-16 15:08:15.219678 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-01-16 15:08:15.219683 | orchestrator | Thursday 16 January 2025 15:07:28 +0000 (0:00:00.638) 0:08:47.067 ****** 2025-01-16 15:08:15.219687 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.219692 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.219697 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.219702 | orchestrator | 2025-01-16 15:08:15.219707 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-01-16 15:08:15.219712 | orchestrator | Thursday 16 January 2025 15:07:28 +0000 (0:00:00.203) 0:08:47.271 ****** 2025-01-16 15:08:15.219717 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.219722 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.219726 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.219731 | orchestrator | 2025-01-16 15:08:15.219736 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-01-16 15:08:15.219741 | orchestrator | Thursday 16 January 2025 15:07:28 +0000 (0:00:00.194) 0:08:47.465 ****** 2025-01-16 15:08:15.219746 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.219751 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.219756 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.219761 | orchestrator | 2025-01-16 15:08:15.219766 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-01-16 15:08:15.219771 | orchestrator | Thursday 16 January 2025 15:07:29 +0000 (0:00:00.200) 0:08:47.666 ****** 2025-01-16 15:08:15.219776 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.219780 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.219785 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.219791 | orchestrator | 2025-01-16 15:08:15.219799 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-01-16 15:08:15.219807 | orchestrator | Thursday 16 January 2025 15:07:29 +0000 (0:00:00.336) 0:08:48.003 ****** 2025-01-16 15:08:15.219815 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.219823 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.219831 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.219838 | orchestrator | 2025-01-16 15:08:15.219846 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-01-16 15:08:15.219854 | orchestrator | Thursday 16 January 2025 15:07:29 +0000 (0:00:00.208) 0:08:48.211 ****** 2025-01-16 15:08:15.219861 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.219868 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.219876 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.219884 | orchestrator | 2025-01-16 15:08:15.219892 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] 
************************* 2025-01-16 15:08:15.219900 | orchestrator | Thursday 16 January 2025 15:07:29 +0000 (0:00:00.199) 0:08:48.410 ****** 2025-01-16 15:08:15.219908 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.219915 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.219920 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.219925 | orchestrator | 2025-01-16 15:08:15.219930 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-01-16 15:08:15.219938 | orchestrator | Thursday 16 January 2025 15:07:30 +0000 (0:00:00.445) 0:08:48.856 ****** 2025-01-16 15:08:15.219944 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.219948 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.219953 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.219958 | orchestrator | 2025-01-16 15:08:15.219963 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-01-16 15:08:15.219968 | orchestrator | Thursday 16 January 2025 15:07:30 +0000 (0:00:00.347) 0:08:49.203 ****** 2025-01-16 15:08:15.219973 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.219978 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.219983 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.219987 | orchestrator | 2025-01-16 15:08:15.219992 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-01-16 15:08:15.219997 | orchestrator | Thursday 16 January 2025 15:07:30 +0000 (0:00:00.205) 0:08:49.408 ****** 2025-01-16 15:08:15.220002 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.220007 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.220012 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.220017 | orchestrator | 2025-01-16 15:08:15.220022 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-01-16 15:08:15.220027 | orchestrator | Thursday 16 January 2025 15:07:31 +0000 (0:00:00.224) 0:08:49.633 ****** 2025-01-16 15:08:15.220032 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.220036 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.220041 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.220046 | orchestrator | 2025-01-16 15:08:15.220051 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-01-16 15:08:15.220059 | orchestrator | Thursday 16 January 2025 15:07:31 +0000 (0:00:00.230) 0:08:49.864 ****** 2025-01-16 15:08:15.220064 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.220071 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.220076 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.220081 | orchestrator | 2025-01-16 15:08:15.220089 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-01-16 15:08:15.220094 | orchestrator | Thursday 16 January 2025 15:07:31 +0000 (0:00:00.354) 0:08:50.218 ****** 2025-01-16 15:08:15.220099 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.220104 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.220109 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.220114 | orchestrator | 2025-01-16 15:08:15.220119 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-01-16 15:08:15.220124 | orchestrator | Thursday 16 January 2025 15:07:31 +0000 (0:00:00.196) 
0:08:50.415 ****** 2025-01-16 15:08:15.220129 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.220134 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.220138 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.220143 | orchestrator | 2025-01-16 15:08:15.220148 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-01-16 15:08:15.220153 | orchestrator | Thursday 16 January 2025 15:07:32 +0000 (0:00:00.198) 0:08:50.613 ****** 2025-01-16 15:08:15.220158 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.220162 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.220167 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.220172 | orchestrator | 2025-01-16 15:08:15.220177 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-01-16 15:08:15.220182 | orchestrator | Thursday 16 January 2025 15:07:32 +0000 (0:00:00.194) 0:08:50.808 ****** 2025-01-16 15:08:15.220187 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.220191 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.220196 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.220201 | orchestrator | 2025-01-16 15:08:15.220206 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-01-16 15:08:15.220211 | orchestrator | Thursday 16 January 2025 15:07:32 +0000 (0:00:00.350) 0:08:51.158 ****** 2025-01-16 15:08:15.220219 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.220224 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.220229 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.220234 | orchestrator | 2025-01-16 15:08:15.220239 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-01-16 15:08:15.220243 | orchestrator | Thursday 16 January 2025 15:07:32 +0000 (0:00:00.231) 0:08:51.390 ****** 2025-01-16 15:08:15.220248 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.220253 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.220258 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.220263 | orchestrator | 2025-01-16 15:08:15.220268 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-01-16 15:08:15.220273 | orchestrator | Thursday 16 January 2025 15:07:33 +0000 (0:00:00.225) 0:08:51.616 ****** 2025-01-16 15:08:15.220277 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.220282 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.220287 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.220292 | orchestrator | 2025-01-16 15:08:15.220297 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-01-16 15:08:15.220302 | orchestrator | Thursday 16 January 2025 15:07:33 +0000 (0:00:00.222) 0:08:51.838 ****** 2025-01-16 15:08:15.220309 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.220314 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.220319 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.220324 | orchestrator | 2025-01-16 15:08:15.220329 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-01-16 15:08:15.220334 | orchestrator | Thursday 16 January 2025 15:07:33 +0000 (0:00:00.361) 0:08:52.199 ****** 2025-01-16 15:08:15.220338 | orchestrator | skipping: 
[testbed-node-3] 2025-01-16 15:08:15.220343 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.220348 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.220353 | orchestrator | 2025-01-16 15:08:15.220358 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-01-16 15:08:15.220363 | orchestrator | Thursday 16 January 2025 15:07:33 +0000 (0:00:00.224) 0:08:52.424 ****** 2025-01-16 15:08:15.220367 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.220372 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.220377 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.220382 | orchestrator | 2025-01-16 15:08:15.220387 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-01-16 15:08:15.220391 | orchestrator | Thursday 16 January 2025 15:07:34 +0000 (0:00:00.202) 0:08:52.626 ****** 2025-01-16 15:08:15.220396 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.220401 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.220406 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.220411 | orchestrator | 2025-01-16 15:08:15.220416 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-01-16 15:08:15.220421 | orchestrator | Thursday 16 January 2025 15:07:34 +0000 (0:00:00.214) 0:08:52.840 ****** 2025-01-16 15:08:15.220425 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.220430 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.220435 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.220440 | orchestrator | 2025-01-16 15:08:15.220445 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-01-16 15:08:15.220450 | orchestrator | Thursday 16 January 2025 15:07:34 +0000 (0:00:00.346) 0:08:53.187 ****** 2025-01-16 15:08:15.220455 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.220459 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.220464 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.220469 | orchestrator | 2025-01-16 15:08:15.220474 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-01-16 15:08:15.220479 | orchestrator | Thursday 16 January 2025 15:07:34 +0000 (0:00:00.206) 0:08:53.393 ****** 2025-01-16 15:08:15.220487 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.220492 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.220496 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.220501 | orchestrator | 2025-01-16 15:08:15.220506 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-01-16 15:08:15.220514 | orchestrator | Thursday 16 January 2025 15:07:35 +0000 (0:00:00.214) 0:08:53.607 ****** 2025-01-16 15:08:15.220519 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.220524 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.220528 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.220533 | orchestrator | 2025-01-16 15:08:15.220538 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-01-16 15:08:15.220543 | orchestrator | Thursday 16 January 2025 15:07:35 +0000 (0:00:00.211) 0:08:53.819 ****** 2025-01-16 
15:08:15.220548 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.220566 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.220571 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.220576 | orchestrator | 2025-01-16 15:08:15.220581 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-01-16 15:08:15.220590 | orchestrator | Thursday 16 January 2025 15:07:35 +0000 (0:00:00.334) 0:08:54.153 ****** 2025-01-16 15:08:15.220595 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-01-16 15:08:15.220600 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-01-16 15:08:15.220605 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.220613 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-01-16 15:08:15.220617 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-01-16 15:08:15.220622 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.220627 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-01-16 15:08:15.220632 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-01-16 15:08:15.220637 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.220642 | orchestrator | 2025-01-16 15:08:15.220647 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-01-16 15:08:15.220654 | orchestrator | Thursday 16 January 2025 15:07:35 +0000 (0:00:00.238) 0:08:54.391 ****** 2025-01-16 15:08:15.220659 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-01-16 15:08:15.220664 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-01-16 15:08:15.220669 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.220674 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-01-16 15:08:15.220679 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-01-16 15:08:15.220684 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.220689 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-01-16 15:08:15.220694 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-01-16 15:08:15.220698 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.220703 | orchestrator | 2025-01-16 15:08:15.220708 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-01-16 15:08:15.220713 | orchestrator | Thursday 16 January 2025 15:07:36 +0000 (0:00:00.237) 0:08:54.629 ****** 2025-01-16 15:08:15.220718 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.220723 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.220728 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.220733 | orchestrator | 2025-01-16 15:08:15.220738 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-01-16 15:08:15.220746 | orchestrator | Thursday 16 January 2025 15:07:36 +0000 (0:00:00.210) 0:08:54.839 ****** 2025-01-16 15:08:15.220753 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.220761 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.220769 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.220776 | orchestrator | 2025-01-16 15:08:15.220785 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-01-16 15:08:15.220795 
| orchestrator | Thursday 16 January 2025 15:07:36 +0000 (0:00:00.363) 0:08:55.203 ****** 2025-01-16 15:08:15.220800 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.220805 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.220810 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.220815 | orchestrator | 2025-01-16 15:08:15.220820 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-01-16 15:08:15.220824 | orchestrator | Thursday 16 January 2025 15:07:36 +0000 (0:00:00.203) 0:08:55.406 ****** 2025-01-16 15:08:15.220829 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.220834 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.220839 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.220844 | orchestrator | 2025-01-16 15:08:15.220848 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-01-16 15:08:15.220853 | orchestrator | Thursday 16 January 2025 15:07:37 +0000 (0:00:00.221) 0:08:55.628 ****** 2025-01-16 15:08:15.220858 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.220863 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.220868 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.220872 | orchestrator | 2025-01-16 15:08:15.220877 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-01-16 15:08:15.220882 | orchestrator | Thursday 16 January 2025 15:07:37 +0000 (0:00:00.212) 0:08:55.840 ****** 2025-01-16 15:08:15.220887 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.220892 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.220897 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.220902 | orchestrator | 2025-01-16 15:08:15.220907 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-01-16 15:08:15.220912 | orchestrator | Thursday 16 January 2025 15:07:37 +0000 (0:00:00.361) 0:08:56.202 ****** 2025-01-16 15:08:15.220916 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-01-16 15:08:15.220921 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-01-16 15:08:15.220926 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-01-16 15:08:15.220931 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.220936 | orchestrator | 2025-01-16 15:08:15.220941 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-01-16 15:08:15.220946 | orchestrator | Thursday 16 January 2025 15:07:37 +0000 (0:00:00.284) 0:08:56.487 ****** 2025-01-16 15:08:15.220950 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-01-16 15:08:15.220955 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-01-16 15:08:15.220963 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-01-16 15:08:15.220968 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.220973 | orchestrator | 2025-01-16 15:08:15.220978 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-01-16 15:08:15.220983 | orchestrator | Thursday 16 January 2025 15:07:38 +0000 (0:00:00.281) 0:08:56.768 ****** 2025-01-16 15:08:15.220987 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-01-16 15:08:15.220992 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2025-01-16 15:08:15.220997 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-01-16 15:08:15.221002 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.221007 | orchestrator | 2025-01-16 15:08:15.221012 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-01-16 15:08:15.221017 | orchestrator | Thursday 16 January 2025 15:07:38 +0000 (0:00:00.284) 0:08:57.052 ****** 2025-01-16 15:08:15.221021 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.221026 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.221031 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.221036 | orchestrator | 2025-01-16 15:08:15.221041 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-01-16 15:08:15.221049 | orchestrator | Thursday 16 January 2025 15:07:38 +0000 (0:00:00.212) 0:08:57.265 ****** 2025-01-16 15:08:15.221054 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-01-16 15:08:15.221059 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.221064 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-01-16 15:08:15.221069 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.221073 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-01-16 15:08:15.221078 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.221083 | orchestrator | 2025-01-16 15:08:15.221088 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-01-16 15:08:15.221093 | orchestrator | Thursday 16 January 2025 15:07:39 +0000 (0:00:00.305) 0:08:57.571 ****** 2025-01-16 15:08:15.221098 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.221103 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.221107 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.221112 | orchestrator | 2025-01-16 15:08:15.221117 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-01-16 15:08:15.221122 | orchestrator | Thursday 16 January 2025 15:07:39 +0000 (0:00:00.364) 0:08:57.935 ****** 2025-01-16 15:08:15.221127 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.221132 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.221137 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.221141 | orchestrator | 2025-01-16 15:08:15.221146 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-01-16 15:08:15.221151 | orchestrator | Thursday 16 January 2025 15:07:39 +0000 (0:00:00.215) 0:08:58.150 ****** 2025-01-16 15:08:15.221156 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-01-16 15:08:15.221161 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.221166 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-01-16 15:08:15.221170 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.221175 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-01-16 15:08:15.221180 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.221185 | orchestrator | 2025-01-16 15:08:15.221190 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-01-16 15:08:15.221195 | orchestrator | Thursday 16 January 2025 15:07:39 +0000 (0:00:00.309) 0:08:58.460 ****** 2025-01-16 15:08:15.221200 | orchestrator | skipping: [testbed-node-3] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-01-16 15:08:15.221204 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.221210 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-01-16 15:08:15.221215 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.221220 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-01-16 15:08:15.221224 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.221229 | orchestrator | 2025-01-16 15:08:15.221234 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-01-16 15:08:15.221239 | orchestrator | Thursday 16 January 2025 15:07:40 +0000 (0:00:00.218) 0:08:58.678 ****** 2025-01-16 15:08:15.221244 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-01-16 15:08:15.221252 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-01-16 15:08:15.221257 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-01-16 15:08:15.221262 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.221266 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-01-16 15:08:15.221271 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-01-16 15:08:15.221276 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-01-16 15:08:15.221281 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.221289 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-01-16 15:08:15.221294 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-01-16 15:08:15.221299 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-01-16 15:08:15.221304 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.221309 | orchestrator | 2025-01-16 15:08:15.221314 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-01-16 15:08:15.221319 | orchestrator | Thursday 16 January 2025 15:07:40 +0000 (0:00:00.562) 0:08:59.241 ****** 2025-01-16 15:08:15.221324 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.221329 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.221333 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.221338 | orchestrator | 2025-01-16 15:08:15.221343 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-01-16 15:08:15.221351 | orchestrator | Thursday 16 January 2025 15:07:41 +0000 (0:00:00.359) 0:08:59.601 ****** 2025-01-16 15:08:15.221356 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-01-16 15:08:15.221361 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.221368 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-01-16 15:08:15.221373 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.221378 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-01-16 15:08:15.221383 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.221388 | orchestrator | 2025-01-16 15:08:15.221393 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-01-16 15:08:15.221398 | orchestrator | Thursday 16 January 2025 15:07:41 +0000 
(0:00:00.527) 0:09:00.129 ****** 2025-01-16 15:08:15.221403 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.221408 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.221412 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.221417 | orchestrator | 2025-01-16 15:08:15.221422 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-01-16 15:08:15.221427 | orchestrator | Thursday 16 January 2025 15:07:41 +0000 (0:00:00.362) 0:09:00.491 ****** 2025-01-16 15:08:15.221432 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.221436 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.221441 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.221446 | orchestrator | 2025-01-16 15:08:15.221451 | orchestrator | TASK [ceph-rgw : include common.yml] ******************************************* 2025-01-16 15:08:15.221456 | orchestrator | Thursday 16 January 2025 15:07:42 +0000 (0:00:00.505) 0:09:00.996 ****** 2025-01-16 15:08:15.221461 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 15:08:15.221465 | orchestrator | 2025-01-16 15:08:15.221470 | orchestrator | TASK [ceph-rgw : create rados gateway directories] ***************************** 2025-01-16 15:08:15.221475 | orchestrator | Thursday 16 January 2025 15:07:42 +0000 (0:00:00.349) 0:09:01.345 ****** 2025-01-16 15:08:15.221480 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2025-01-16 15:08:15.221485 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2025-01-16 15:08:15.221490 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2025-01-16 15:08:15.221495 | orchestrator | 2025-01-16 15:08:15.221499 | orchestrator | TASK [ceph-rgw : get keys from monitors] *************************************** 2025-01-16 15:08:15.221504 | orchestrator | Thursday 16 January 2025 15:07:43 +0000 (0:00:00.587) 0:09:01.933 ****** 2025-01-16 15:08:15.221509 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-01-16 15:08:15.221514 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-01-16 15:08:15.221519 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-01-16 15:08:15.221524 | orchestrator | 2025-01-16 15:08:15.221529 | orchestrator | TASK [ceph-rgw : copy ceph key(s) if needed] *********************************** 2025-01-16 15:08:15.221533 | orchestrator | Thursday 16 January 2025 15:07:44 +0000 (0:00:01.153) 0:09:03.087 ****** 2025-01-16 15:08:15.221541 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-01-16 15:08:15.221546 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-01-16 15:08:15.221564 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:08:15.221570 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-01-16 15:08:15.221575 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-01-16 15:08:15.221580 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:08:15.221585 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-01-16 15:08:15.221590 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-01-16 15:08:15.221594 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:08:15.221599 | orchestrator | 2025-01-16 15:08:15.221604 | orchestrator | TASK [ceph-rgw : copy SSL certificate & key data to certificate path] ********** 2025-01-16 15:08:15.221609 | orchestrator | Thursday 
16 January 2025 15:07:45 +0000 (0:00:00.703) 0:09:03.790 ****** 2025-01-16 15:08:15.221614 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.221619 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.221624 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.221628 | orchestrator | 2025-01-16 15:08:15.221633 | orchestrator | TASK [ceph-rgw : include_tasks pre_requisite.yml] ****************************** 2025-01-16 15:08:15.221638 | orchestrator | Thursday 16 January 2025 15:07:45 +0000 (0:00:00.203) 0:09:03.993 ****** 2025-01-16 15:08:15.221643 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.221648 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.221652 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.221657 | orchestrator | 2025-01-16 15:08:15.221662 | orchestrator | TASK [ceph-rgw : rgw pool creation tasks] ************************************** 2025-01-16 15:08:15.221667 | orchestrator | Thursday 16 January 2025 15:07:45 +0000 (0:00:00.338) 0:09:04.331 ****** 2025-01-16 15:08:15.221672 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-01-16 15:08:15.221677 | orchestrator | 2025-01-16 15:08:15.221682 | orchestrator | TASK [ceph-rgw : create ec profile] ******************************************** 2025-01-16 15:08:15.221686 | orchestrator | Thursday 16 January 2025 15:07:45 +0000 (0:00:00.147) 0:09:04.479 ****** 2025-01-16 15:08:15.221691 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-01-16 15:08:15.221696 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-01-16 15:08:15.221701 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-01-16 15:08:15.221706 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-01-16 15:08:15.221714 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-01-16 15:08:15.221719 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.221724 | orchestrator | 2025-01-16 15:08:15.221729 | orchestrator | TASK [ceph-rgw : set crush rule] *********************************************** 2025-01-16 15:08:15.221733 | orchestrator | Thursday 16 January 2025 15:07:46 +0000 (0:00:00.443) 0:09:04.923 ****** 2025-01-16 15:08:15.221738 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-01-16 15:08:15.221746 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-01-16 15:08:15.221751 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-01-16 15:08:15.221755 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-01-16 15:08:15.221760 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-01-16 
15:08:15.221769 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.221774 | orchestrator | 2025-01-16 15:08:15.221779 | orchestrator | TASK [ceph-rgw : create ec pools for rgw] ************************************** 2025-01-16 15:08:15.221784 | orchestrator | Thursday 16 January 2025 15:07:46 +0000 (0:00:00.558) 0:09:05.481 ****** 2025-01-16 15:08:15.221789 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-01-16 15:08:15.221794 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-01-16 15:08:15.221799 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-01-16 15:08:15.221803 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-01-16 15:08:15.221808 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-01-16 15:08:15.221813 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.221818 | orchestrator | 2025-01-16 15:08:15.221823 | orchestrator | TASK [ceph-rgw : create replicated pools for rgw] ****************************** 2025-01-16 15:08:15.221830 | orchestrator | Thursday 16 January 2025 15:07:47 +0000 (0:00:00.576) 0:09:06.058 ****** 2025-01-16 15:08:15.221835 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-01-16 15:08:15.221841 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-01-16 15:08:15.221861 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-01-16 15:08:15.221867 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-01-16 15:08:15.221872 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-01-16 15:08:15.221877 | orchestrator | 2025-01-16 15:08:15.221881 | orchestrator | TASK [ceph-rgw : include_tasks openstack-keystone.yml] ************************* 2025-01-16 15:08:15.221886 | orchestrator | Thursday 16 January 2025 15:08:03 +0000 (0:00:16.199) 0:09:22.257 ****** 2025-01-16 15:08:15.221891 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.221896 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.221901 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.221906 | orchestrator | 2025-01-16 15:08:15.221911 | orchestrator | TASK [ceph-rgw : include_tasks start_radosgw.yml] ****************************** 2025-01-16 15:08:15.221915 | orchestrator | Thursday 16 January 2025 15:08:03 +0000 (0:00:00.216) 0:09:22.474 ****** 2025-01-16 15:08:15.221920 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.221925 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.221930 | orchestrator | skipping: 
[testbed-node-5] 2025-01-16 15:08:15.221935 | orchestrator | 2025-01-16 15:08:15.221940 | orchestrator | TASK [ceph-rgw : include start_docker_rgw.yml] ********************************* 2025-01-16 15:08:15.221944 | orchestrator | Thursday 16 January 2025 15:08:04 +0000 (0:00:00.289) 0:09:22.763 ****** 2025-01-16 15:08:15.221949 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 15:08:15.221954 | orchestrator | 2025-01-16 15:08:15.221959 | orchestrator | TASK [ceph-rgw : include_task systemd.yml] ************************************* 2025-01-16 15:08:15.221964 | orchestrator | Thursday 16 January 2025 15:08:04 +0000 (0:00:00.364) 0:09:23.128 ****** 2025-01-16 15:08:15.221975 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 15:08:15.221980 | orchestrator | 2025-01-16 15:08:15.221985 | orchestrator | TASK [ceph-rgw : generate systemd unit file] *********************************** 2025-01-16 15:08:15.221992 | orchestrator | Thursday 16 January 2025 15:08:05 +0000 (0:00:00.479) 0:09:23.607 ****** 2025-01-16 15:08:15.221997 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:08:15.222002 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:08:15.222007 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:08:15.222011 | orchestrator | 2025-01-16 15:08:15.222037 | orchestrator | TASK [ceph-rgw : generate systemd ceph-radosgw target file] ******************** 2025-01-16 15:08:15.222042 | orchestrator | Thursday 16 January 2025 15:08:05 +0000 (0:00:00.713) 0:09:24.320 ****** 2025-01-16 15:08:15.222047 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:08:15.222052 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:08:15.222057 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:08:15.222062 | orchestrator | 2025-01-16 15:08:15.222067 | orchestrator | TASK [ceph-rgw : enable ceph-radosgw.target] *********************************** 2025-01-16 15:08:15.222072 | orchestrator | Thursday 16 January 2025 15:08:06 +0000 (0:00:00.716) 0:09:25.036 ****** 2025-01-16 15:08:15.222076 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:08:15.222081 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:08:15.222086 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:08:15.222091 | orchestrator | 2025-01-16 15:08:15.222096 | orchestrator | TASK [ceph-rgw : systemd start rgw container] ********************************** 2025-01-16 15:08:15.222101 | orchestrator | Thursday 16 January 2025 15:08:07 +0000 (0:00:01.259) 0:09:26.296 ****** 2025-01-16 15:08:15.222106 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-01-16 15:08:15.222111 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-01-16 15:08:15.222116 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-01-16 15:08:15.222121 | orchestrator | 2025-01-16 15:08:15.222126 | orchestrator | TASK [ceph-rgw : include_tasks multisite/main.yml] ***************************** 2025-01-16 15:08:15.222131 | orchestrator | Thursday 16 January 2025 15:08:09 +0000 (0:00:01.326) 0:09:27.623 ****** 2025-01-16 15:08:15.222135 | orchestrator | skipping: [testbed-node-3] 2025-01-16 
15:08:15.222140 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:08:15.222145 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:08:15.222150 | orchestrator | 2025-01-16 15:08:15.222155 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-01-16 15:08:15.222160 | orchestrator | Thursday 16 January 2025 15:08:09 +0000 (0:00:00.872) 0:09:28.495 ****** 2025-01-16 15:08:15.222165 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:08:15.222169 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:08:15.222174 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:08:15.222179 | orchestrator | 2025-01-16 15:08:15.222184 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-01-16 15:08:15.222189 | orchestrator | Thursday 16 January 2025 15:08:10 +0000 (0:00:00.496) 0:09:28.991 ****** 2025-01-16 15:08:15.222194 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 15:08:15.222199 | orchestrator | 2025-01-16 15:08:15.222203 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ******** 2025-01-16 15:08:15.222208 | orchestrator | Thursday 16 January 2025 15:08:10 +0000 (0:00:00.471) 0:09:29.463 ****** 2025-01-16 15:08:15.222213 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.222218 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.222223 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.222228 | orchestrator | 2025-01-16 15:08:15.222233 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] *********************** 2025-01-16 15:08:15.222242 | orchestrator | Thursday 16 January 2025 15:08:11 +0000 (0:00:00.225) 0:09:29.688 ****** 2025-01-16 15:08:15.222247 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:08:15.222252 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:08:15.222257 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:08:15.222262 | orchestrator | 2025-01-16 15:08:15.222267 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ******************** 2025-01-16 15:08:15.222272 | orchestrator | Thursday 16 January 2025 15:08:11 +0000 (0:00:00.751) 0:09:30.440 ****** 2025-01-16 15:08:15.222276 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-01-16 15:08:15.222284 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-01-16 15:08:15.222289 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-01-16 15:08:15.222294 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:08:15.222299 | orchestrator | 2025-01-16 15:08:15.222304 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] ********* 2025-01-16 15:08:15.222308 | orchestrator | Thursday 16 January 2025 15:08:12 +0000 (0:00:00.630) 0:09:31.070 ****** 2025-01-16 15:08:15.222313 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:08:15.222318 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:08:15.222323 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:08:15.222328 | orchestrator | 2025-01-16 15:08:15.222333 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-01-16 15:08:15.222342 | orchestrator | Thursday 16 January 2025 15:08:12 +0000 (0:00:00.362) 0:09:31.433 ****** 2025-01-16 15:08:15.222347 | orchestrator | changed: [testbed-node-3] 2025-01-16 
15:08:15.222354 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:08:15.222360 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:08:15.222364 | orchestrator | 2025-01-16 15:08:15.222369 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 15:08:15.222374 | orchestrator | testbed-node-0 : ok=120  changed=33  unreachable=0 failed=0 skipped=274  rescued=0 ignored=0 2025-01-16 15:08:15.222383 | orchestrator | testbed-node-1 : ok=116  changed=32  unreachable=0 failed=0 skipped=263  rescued=0 ignored=0 2025-01-16 15:08:15.222391 | orchestrator | testbed-node-2 : ok=123  changed=34  unreachable=0 failed=0 skipped=262  rescued=0 ignored=0 2025-01-16 15:08:18.209547 | orchestrator | testbed-node-3 : ok=184  changed=50  unreachable=0 failed=0 skipped=366  rescued=0 ignored=0 2025-01-16 15:08:18.209669 | orchestrator | testbed-node-4 : ok=164  changed=43  unreachable=0 failed=0 skipped=310  rescued=0 ignored=0 2025-01-16 15:08:18.209723 | orchestrator | testbed-node-5 : ok=166  changed=44  unreachable=0 failed=0 skipped=308  rescued=0 ignored=0 2025-01-16 15:08:18.209734 | orchestrator | 2025-01-16 15:08:18.209743 | orchestrator | 2025-01-16 15:08:18.209754 | orchestrator | 2025-01-16 15:08:18.209763 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 15:08:18.209775 | orchestrator | Thursday 16 January 2025 15:08:13 +0000 (0:00:00.745) 0:09:32.179 ****** 2025-01-16 15:08:18.209783 | orchestrator | =============================================================================== 2025-01-16 15:08:18.209792 | orchestrator | ceph-osd : use ceph-volume to create bluestore osds -------------------- 25.98s 2025-01-16 15:08:18.209801 | orchestrator | ceph-mon : waiting for the monitor(s) to form the quorum... 
------------ 20.96s 2025-01-16 15:08:18.209809 | orchestrator | ceph-container-common : pulling nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy image -- 17.64s 2025-01-16 15:08:18.209818 | orchestrator | ceph-mds : wait for mds socket to exist -------------------------------- 16.36s 2025-01-16 15:08:18.209827 | orchestrator | ceph-rgw : create replicated pools for rgw ----------------------------- 16.20s 2025-01-16 15:08:18.209860 | orchestrator | ceph-mgr : wait for all mgr to be up ----------------------------------- 12.65s 2025-01-16 15:08:18.209869 | orchestrator | ceph-osd : wait for all osd to be up ----------------------------------- 11.68s 2025-01-16 15:08:18.209877 | orchestrator | ceph-mgr : disable ceph mgr enabled modules ----------------------------- 6.28s 2025-01-16 15:08:18.209885 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 5.21s 2025-01-16 15:08:18.209894 | orchestrator | ceph-mgr : create ceph mgr keyring(s) on a mon node --------------------- 4.88s 2025-01-16 15:08:18.209902 | orchestrator | ceph-config : create ceph initial directories --------------------------- 4.64s 2025-01-16 15:08:18.209911 | orchestrator | ceph-mds : create filesystem pools -------------------------------------- 4.57s 2025-01-16 15:08:18.209919 | orchestrator | ceph-mgr : add modules to ceph-mgr -------------------------------------- 4.44s 2025-01-16 15:08:18.209928 | orchestrator | ceph-mon : fetch ceph initial keys -------------------------------------- 4.32s 2025-01-16 15:08:18.209936 | orchestrator | ceph-config : generate ceph.conf configuration file --------------------- 3.36s 2025-01-16 15:08:18.209944 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 2.86s 2025-01-16 15:08:18.209953 | orchestrator | ceph-crash : start the ceph-crash service ------------------------------- 2.70s 2025-01-16 15:08:18.209961 | orchestrator | ceph-osd : apply operating system tuning -------------------------------- 2.50s 2025-01-16 15:08:18.209969 | orchestrator | ceph-osd : systemd start osd -------------------------------------------- 2.46s 2025-01-16 15:08:18.209977 | orchestrator | ceph-crash : create client.crash keyring -------------------------------- 2.39s 2025-01-16 15:08:18.209986 | orchestrator | 2025-01-16 15:08:15 | INFO  | Task 1b3262fd-6795-4129-addf-4d4ddd4ed0dd is in state STARTED 2025-01-16 15:08:18.209995 | orchestrator | 2025-01-16 15:08:15 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:08:18.210065 | orchestrator | 2025-01-16 15:08:18 | INFO  | Task c003f5ec-2aa2-4397-9f74-def75e1af845 is in state STARTED 2025-01-16 15:08:21.236889 | orchestrator | 2025-01-16 15:08:18 | INFO  | Task be7674ca-28b0-439d-bfbd-76aef9632369 is in state STARTED 2025-01-16 15:08:21.237046 | orchestrator | 2025-01-16 15:08:18 | INFO  | Task 1b3262fd-6795-4129-addf-4d4ddd4ed0dd is in state STARTED 2025-01-16 15:08:21.237067 | orchestrator | 2025-01-16 15:08:18 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:08:21.237102 | orchestrator | 2025-01-16 15:08:21 | INFO  | Task c003f5ec-2aa2-4397-9f74-def75e1af845 is in state STARTED 2025-01-16 15:08:21.237453 | orchestrator | 2025-01-16 15:08:21 | INFO  | Task be7674ca-28b0-439d-bfbd-76aef9632369 is in state STARTED 2025-01-16 15:08:21.237493 | orchestrator | 2025-01-16 15:08:21 | INFO  | Task 1b3262fd-6795-4129-addf-4d4ddd4ed0dd is in state STARTED 2025-01-16 15:08:24.258714 | orchestrator | 2025-01-16 15:08:21 | INFO  | Wait 1 second(s) until the 
next check 2025-01-16 15:08:24.258826 | orchestrator | 2025-01-16 15:08:24 | INFO  | Task c003f5ec-2aa2-4397-9f74-def75e1af845 is in state STARTED 2025-01-16 15:08:24.260481 | orchestrator | 2025-01-16 15:08:24 | INFO  | Task be7674ca-28b0-439d-bfbd-76aef9632369 is in state STARTED 2025-01-16 15:08:24.260967 | orchestrator | 2025-01-16 15:08:24 | INFO  | Task 1b3262fd-6795-4129-addf-4d4ddd4ed0dd is in state STARTED 2025-01-16 15:08:24.261096 | orchestrator | 2025-01-16 15:08:24 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:08:27.286075 | orchestrator | 2025-01-16 15:08:27 | INFO  | Task c003f5ec-2aa2-4397-9f74-def75e1af845 is in state STARTED 2025-01-16 15:08:27.286629 | orchestrator | 2025-01-16 15:08:27 | INFO  | Task be7674ca-28b0-439d-bfbd-76aef9632369 is in state STARTED 2025-01-16 15:08:27.286853 | orchestrator | 2025-01-16 15:08:27 | INFO  | Task 1b3262fd-6795-4129-addf-4d4ddd4ed0dd is in state STARTED 2025-01-16 15:08:30.317254 | orchestrator | 2025-01-16 15:08:27 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:08:30.317407 | orchestrator | 2025-01-16 15:08:30 | INFO  | Task c003f5ec-2aa2-4397-9f74-def75e1af845 is in state STARTED 2025-01-16 15:08:30.317890 | orchestrator | 2025-01-16 15:08:30 | INFO  | Task be7674ca-28b0-439d-bfbd-76aef9632369 is in state STARTED 2025-01-16 15:08:30.317949 | orchestrator | 2025-01-16 15:08:30 | INFO  | Task 1b3262fd-6795-4129-addf-4d4ddd4ed0dd is in state STARTED 2025-01-16 15:08:33.344926 | orchestrator | 2025-01-16 15:08:30 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:08:33.345020 | orchestrator | 2025-01-16 15:08:33 | INFO  | Task c003f5ec-2aa2-4397-9f74-def75e1af845 is in state STARTED 2025-01-16 15:08:36.369730 | orchestrator | 2025-01-16 15:08:33 | INFO  | Task be7674ca-28b0-439d-bfbd-76aef9632369 is in state STARTED 2025-01-16 15:08:36.369878 | orchestrator | 2025-01-16 15:08:33 | INFO  | Task 1b3262fd-6795-4129-addf-4d4ddd4ed0dd is in state STARTED 2025-01-16 15:08:36.369910 | orchestrator | 2025-01-16 15:08:33 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:08:36.369958 | orchestrator | 2025-01-16 15:08:36 | INFO  | Task c003f5ec-2aa2-4397-9f74-def75e1af845 is in state STARTED 2025-01-16 15:08:36.371105 | orchestrator | 2025-01-16 15:08:36 | INFO  | Task be7674ca-28b0-439d-bfbd-76aef9632369 is in state STARTED 2025-01-16 15:08:36.371202 | orchestrator | 2025-01-16 15:08:36 | INFO  | Task 1b3262fd-6795-4129-addf-4d4ddd4ed0dd is in state STARTED 2025-01-16 15:08:39.400138 | orchestrator | 2025-01-16 15:08:36 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:08:39.400248 | orchestrator | 2025-01-16 15:08:39 | INFO  | Task c003f5ec-2aa2-4397-9f74-def75e1af845 is in state STARTED 2025-01-16 15:08:39.400785 | orchestrator | 2025-01-16 15:08:39 | INFO  | Task be7674ca-28b0-439d-bfbd-76aef9632369 is in state STARTED 2025-01-16 15:08:39.401191 | orchestrator | 2025-01-16 15:08:39 | INFO  | Task 1b3262fd-6795-4129-addf-4d4ddd4ed0dd is in state STARTED 2025-01-16 15:08:39.401279 | orchestrator | 2025-01-16 15:08:39 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:08:42.424464 | orchestrator | 2025-01-16 15:08:42 | INFO  | Task c003f5ec-2aa2-4397-9f74-def75e1af845 is in state STARTED 2025-01-16 15:08:42.424765 | orchestrator | 2025-01-16 15:08:42 | INFO  | Task be7674ca-28b0-439d-bfbd-76aef9632369 is in state STARTED 2025-01-16 15:08:42.424796 | orchestrator | 2025-01-16 15:08:42 | INFO  | Task 1b3262fd-6795-4129-addf-4d4ddd4ed0dd is in state STARTED 2025-01-16 
15:08:42.424819 | orchestrator | 2025-01-16 15:08:42 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:08:45.448089 | orchestrator | 2025-01-16 15:08:45 | INFO  | Task c003f5ec-2aa2-4397-9f74-def75e1af845 is in state STARTED 2025-01-16 15:08:45.448522 | orchestrator | 2025-01-16 15:08:45 | INFO  | Task be7674ca-28b0-439d-bfbd-76aef9632369 is in state STARTED 2025-01-16 15:08:45.450005 | orchestrator | 2025-01-16 15:08:45 | INFO  | Task 1b3262fd-6795-4129-addf-4d4ddd4ed0dd is in state STARTED 2025-01-16 15:08:48.468707 | orchestrator | 2025-01-16 15:08:45 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:08:48.468826 | orchestrator | 2025-01-16 15:08:48 | INFO  | Task c003f5ec-2aa2-4397-9f74-def75e1af845 is in state STARTED 2025-01-16 15:08:51.486140 | orchestrator | 2025-01-16 15:08:48 | INFO  | Task be7674ca-28b0-439d-bfbd-76aef9632369 is in state STARTED 2025-01-16 15:08:51.486265 | orchestrator | 2025-01-16 15:08:48 | INFO  | Task 1b3262fd-6795-4129-addf-4d4ddd4ed0dd is in state STARTED 2025-01-16 15:08:51.486287 | orchestrator | 2025-01-16 15:08:48 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:08:51.486351 | orchestrator | 2025-01-16 15:08:51 | INFO  | Task c003f5ec-2aa2-4397-9f74-def75e1af845 is in state STARTED 2025-01-16 15:08:54.505500 | orchestrator | 2025-01-16 15:08:51 | INFO  | Task be7674ca-28b0-439d-bfbd-76aef9632369 is in state STARTED 2025-01-16 15:08:54.505792 | orchestrator | 2025-01-16 15:08:51 | INFO  | Task 1b3262fd-6795-4129-addf-4d4ddd4ed0dd is in state STARTED 2025-01-16 15:08:54.505829 | orchestrator | 2025-01-16 15:08:51 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:08:54.506100 | orchestrator | 2025-01-16 15:08:54 | INFO  | Task c003f5ec-2aa2-4397-9f74-def75e1af845 is in state STARTED 2025-01-16 15:08:57.524460 | orchestrator | 2025-01-16 15:08:54 | INFO  | Task be7674ca-28b0-439d-bfbd-76aef9632369 is in state STARTED 2025-01-16 15:08:57.524592 | orchestrator | 2025-01-16 15:08:54 | INFO  | Task 1b3262fd-6795-4129-addf-4d4ddd4ed0dd is in state STARTED 2025-01-16 15:08:57.524603 | orchestrator | 2025-01-16 15:08:54 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:08:57.524622 | orchestrator | 2025-01-16 15:08:57 | INFO  | Task c003f5ec-2aa2-4397-9f74-def75e1af845 is in state STARTED 2025-01-16 15:09:00.541999 | orchestrator | 2025-01-16 15:08:57 | INFO  | Task be7674ca-28b0-439d-bfbd-76aef9632369 is in state STARTED 2025-01-16 15:09:00.542117 | orchestrator | 2025-01-16 15:08:57 | INFO  | Task 1b3262fd-6795-4129-addf-4d4ddd4ed0dd is in state STARTED 2025-01-16 15:09:00.542125 | orchestrator | 2025-01-16 15:08:57 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:09:00.542142 | orchestrator | 2025-01-16 15:09:00 | INFO  | Task c003f5ec-2aa2-4397-9f74-def75e1af845 is in state STARTED 2025-01-16 15:09:00.542609 | orchestrator | 2025-01-16 15:09:00 | INFO  | Task be7674ca-28b0-439d-bfbd-76aef9632369 is in state STARTED 2025-01-16 15:09:00.543100 | orchestrator | 2025-01-16 15:09:00 | INFO  | Task 1b3262fd-6795-4129-addf-4d4ddd4ed0dd is in state STARTED 2025-01-16 15:09:03.574706 | orchestrator | 2025-01-16 15:09:00 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:09:03.574839 | orchestrator | 2025-01-16 15:09:03 | INFO  | Task c003f5ec-2aa2-4397-9f74-def75e1af845 is in state STARTED 2025-01-16 15:09:06.592502 | orchestrator | 2025-01-16 15:09:03 | INFO  | Task be7674ca-28b0-439d-bfbd-76aef9632369 is in state STARTED 2025-01-16 15:09:06.592719 | orchestrator | 2025-01-16 
15:09:03 | INFO  | Task 1b3262fd-6795-4129-addf-4d4ddd4ed0dd is in state STARTED 2025-01-16 15:09:06.592744 | orchestrator | 2025-01-16 15:09:03 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:09:06.592780 | orchestrator | 2025-01-16 15:09:06 | INFO  | Task c003f5ec-2aa2-4397-9f74-def75e1af845 is in state STARTED 2025-01-16 15:09:09.614323 | orchestrator | 2025-01-16 15:09:06 | INFO  | Task be7674ca-28b0-439d-bfbd-76aef9632369 is in state STARTED 2025-01-16 15:09:09.614930 | orchestrator | 2025-01-16 15:09:06 | INFO  | Task 1b3262fd-6795-4129-addf-4d4ddd4ed0dd is in state STARTED 2025-01-16 15:09:09.614951 | orchestrator | 2025-01-16 15:09:06 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:09:09.614969 | orchestrator | 2025-01-16 15:09:09 | INFO  | Task c003f5ec-2aa2-4397-9f74-def75e1af845 is in state STARTED 2025-01-16 15:09:12.632312 | orchestrator | 2025-01-16 15:09:09 | INFO  | Task be7674ca-28b0-439d-bfbd-76aef9632369 is in state STARTED 2025-01-16 15:09:12.632450 | orchestrator | 2025-01-16 15:09:09 | INFO  | Task 1b3262fd-6795-4129-addf-4d4ddd4ed0dd is in state STARTED 2025-01-16 15:09:12.632478 | orchestrator | 2025-01-16 15:09:09 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:09:12.632515 | orchestrator | 2025-01-16 15:09:12 | INFO  | Task c003f5ec-2aa2-4397-9f74-def75e1af845 is in state STARTED 2025-01-16 15:09:15.650460 | orchestrator | 2025-01-16 15:09:12 | INFO  | Task be7674ca-28b0-439d-bfbd-76aef9632369 is in state STARTED 2025-01-16 15:09:15.650757 | orchestrator | 2025-01-16 15:09:12 | INFO  | Task 1b3262fd-6795-4129-addf-4d4ddd4ed0dd is in state STARTED 2025-01-16 15:09:15.650792 | orchestrator | 2025-01-16 15:09:12 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:09:15.650831 | orchestrator | 2025-01-16 15:09:15 | INFO  | Task c003f5ec-2aa2-4397-9f74-def75e1af845 is in state STARTED 2025-01-16 15:09:15.651297 | orchestrator | 2025-01-16 15:09:15 | INFO  | Task be7674ca-28b0-439d-bfbd-76aef9632369 is in state STARTED 2025-01-16 15:09:15.651343 | orchestrator | 2025-01-16 15:09:15 | INFO  | Task 1b3262fd-6795-4129-addf-4d4ddd4ed0dd is in state STARTED 2025-01-16 15:09:18.675049 | orchestrator | 2025-01-16 15:09:15 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:09:18.675159 | orchestrator | 2025-01-16 15:09:18 | INFO  | Task c003f5ec-2aa2-4397-9f74-def75e1af845 is in state STARTED 2025-01-16 15:09:21.690011 | orchestrator | 2025-01-16 15:09:18 | INFO  | Task be7674ca-28b0-439d-bfbd-76aef9632369 is in state STARTED 2025-01-16 15:09:21.690224 | orchestrator | 2025-01-16 15:09:18 | INFO  | Task 1b3262fd-6795-4129-addf-4d4ddd4ed0dd is in state STARTED 2025-01-16 15:09:21.690246 | orchestrator | 2025-01-16 15:09:18 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:09:21.690281 | orchestrator | 2025-01-16 15:09:21 | INFO  | Task c003f5ec-2aa2-4397-9f74-def75e1af845 is in state STARTED 2025-01-16 15:09:21.693716 | orchestrator | 2025-01-16 15:09:21 | INFO  | Task be7674ca-28b0-439d-bfbd-76aef9632369 is in state STARTED 2025-01-16 15:09:21.693822 | orchestrator | 2025-01-16 15:09:21 | INFO  | Task 1b3262fd-6795-4129-addf-4d4ddd4ed0dd is in state SUCCESS 2025-01-16 15:09:21.693872 | orchestrator | 2025-01-16 15:09:21.693900 | orchestrator | 2025-01-16 15:09:21.693924 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-01-16 15:09:21.693950 | orchestrator | 2025-01-16 15:09:21.693975 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2025-01-16 15:09:21.693999 | orchestrator | Thursday 16 January 2025 15:08:13 +0000 (0:00:00.197) 0:00:00.197 ****** 2025-01-16 15:09:21.694092 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:09:21.694112 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:09:21.694126 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:09:21.694140 | orchestrator | 2025-01-16 15:09:21.694155 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-01-16 15:09:21.694169 | orchestrator | Thursday 16 January 2025 15:08:13 +0000 (0:00:00.249) 0:00:00.447 ****** 2025-01-16 15:09:21.694184 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-01-16 15:09:21.694209 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-01-16 15:09:21.694235 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-01-16 15:09:21.694259 | orchestrator | 2025-01-16 15:09:21.694284 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-01-16 15:09:21.694312 | orchestrator | 2025-01-16 15:09:21.694337 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-01-16 15:09:21.694361 | orchestrator | Thursday 16 January 2025 15:08:14 +0000 (0:00:00.192) 0:00:00.639 ****** 2025-01-16 15:09:21.694385 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:09:21.694411 | orchestrator | 2025-01-16 15:09:21.694448 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-01-16 15:09:21.694474 | orchestrator | Thursday 16 January 2025 15:08:14 +0000 (0:00:00.438) 0:00:01.077 ****** 2025-01-16 15:09:21.694508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-01-16 15:09:21.694636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-01-16 15:09:21.694684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-01-16 15:09:21.694713 | orchestrator | 2025-01-16 15:09:21.694738 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-01-16 15:09:21.694764 | orchestrator | Thursday 16 January 2025 15:08:15 +0000 (0:00:01.355) 0:00:02.433 ****** 2025-01-16 15:09:21.694788 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:09:21.694813 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:09:21.694836 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:09:21.694860 | orchestrator | 2025-01-16 15:09:21.694886 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-01-16 15:09:21.694910 | orchestrator | Thursday 16 January 2025 15:08:16 +0000 (0:00:00.195) 0:00:02.629 ****** 2025-01-16 15:09:21.694947 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-01-16 15:09:21.694971 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-01-16 15:09:21.694995 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-01-16 15:09:21.695019 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-01-16 15:09:21.695042 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-01-16 15:09:21.695068 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-01-16 15:09:21.695092 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-01-16 15:09:21.695118 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-01-16 15:09:21.695143 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-01-16 15:09:21.695183 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-01-16 15:09:21.695209 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-01-16 15:09:21.695235 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-01-16 
15:09:21.695259 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-01-16 15:09:21.695283 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-01-16 15:09:21.695308 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-01-16 15:09:21.695333 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-01-16 15:09:21.695359 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-01-16 15:09:21.695384 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-01-16 15:09:21.695409 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-2, testbed-node-1 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-01-16 15:09:21.695436 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-2, testbed-node-1 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-01-16 15:09:21.695462 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-2, testbed-node-1 => (item={'name': 'designate', 'enabled': True}) 2025-01-16 15:09:21.695490 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-2, testbed-node-1 => (item={'name': 'glance', 'enabled': True}) 2025-01-16 15:09:21.695517 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-2, testbed-node-1 => (item={'name': 'heat', 'enabled': True}) 2025-01-16 15:09:21.695629 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-2, testbed-node-1 => (item={'name': 'ironic', 'enabled': True}) 2025-01-16 15:09:21.695659 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-2, testbed-node-1 => (item={'name': 'keystone', 'enabled': True}) 2025-01-16 15:09:21.695684 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-2, testbed-node-1 => (item={'name': 'magnum', 'enabled': True}) 2025-01-16 15:09:21.695710 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-2, testbed-node-1 => (item={'name': 'manila', 'enabled': True}) 2025-01-16 15:09:21.695735 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-2, testbed-node-1 => (item={'name': 'neutron', 'enabled': True}) 2025-01-16 15:09:21.695762 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-2, testbed-node-1 => (item={'name': 'nova', 'enabled': True}) 2025-01-16 15:09:21.695789 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-2, testbed-node-1 => (item={'name': 'octavia', 'enabled': True}) 2025-01-16 15:09:21.695812 | orchestrator | 2025-01-16 15:09:21.695837 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-01-16 15:09:21.695863 | orchestrator | Thursday 16 January 2025 15:08:16 +0000 (0:00:00.658) 0:00:03.288 ****** 2025-01-16 15:09:21.695887 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:09:21.695912 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:09:21.695935 | orchestrator | ok: [testbed-node-2] 
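The "Update policy file name" / "Check if policies shall be overwritten" / "Update custom policy file name" cycle that follows repeats once per dashboard service included via policy_item.yml above (ceilometer, cinder, designate, glance, heat, ironic, keystone, magnum, manila, neutron, nova, octavia). A minimal sketch of that per-service lookup is shown here, assuming a hypothetical custom-config layout; the paths, file names and helper are illustrative, not kolla-ansible's actual variables or task logic.

import os

# Services taken from the policy_item.yml includes in the log above.
SERVICES = [
    "ceilometer", "cinder", "designate", "glance", "heat", "ironic",
    "keystone", "magnum", "manila", "neutron", "nova", "octavia",
]

# Hypothetical location for operator-provided policy overrides; the real
# location is governed by kolla-ansible's custom config settings.
CUSTOM_CONFIG_DIR = "/etc/kolla/config/horizon"


def resolve_policy_file(service: str) -> str | None:
    """Return a custom policy file for `service` if the operator supplied one.

    When nothing is found (the usual testbed case), the "overwrite" checks are
    skipped and the service keeps its default policy file name.
    """
    for name in (f"{service}_policy.yaml", f"{service}_policy.json"):
        candidate = os.path.join(CUSTOM_CONFIG_DIR, name)
        if os.path.isfile(candidate):   # "Check if policies shall be overwritten"
            return candidate            # "Update custom policy file name"
    return None                         # keep the default -> custom steps skip


if __name__ == "__main__":
    for svc in SERVICES:
        print(f"{svc}: {resolve_policy_file(svc) or 'default policy'}")
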
2025-01-16 15:09:21.695957 | orchestrator | 2025-01-16 15:09:21.695979 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-01-16 15:09:21.696001 | orchestrator | Thursday 16 January 2025 15:08:17 +0000 (0:00:00.473) 0:00:03.761 ****** 2025-01-16 15:09:21.696041 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:09:21.696066 | orchestrator | 2025-01-16 15:09:21.696088 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-01-16 15:09:21.696127 | orchestrator | Thursday 16 January 2025 15:08:17 +0000 (0:00:00.076) 0:00:03.838 ****** 2025-01-16 15:09:21.696151 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:09:21.696175 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:09:21.696197 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:09:21.696222 | orchestrator | 2025-01-16 15:09:21.696246 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-01-16 15:09:21.696270 | orchestrator | Thursday 16 January 2025 15:08:17 +0000 (0:00:00.205) 0:00:04.043 ****** 2025-01-16 15:09:21.696293 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:09:21.696316 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:09:21.696338 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:09:21.696360 | orchestrator | 2025-01-16 15:09:21.696393 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-01-16 15:09:21.696417 | orchestrator | Thursday 16 January 2025 15:08:17 +0000 (0:00:00.334) 0:00:04.378 ****** 2025-01-16 15:09:21.696439 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:09:21.696462 | orchestrator | 2025-01-16 15:09:21.696482 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-01-16 15:09:21.696504 | orchestrator | Thursday 16 January 2025 15:08:17 +0000 (0:00:00.071) 0:00:04.449 ****** 2025-01-16 15:09:21.696526 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:09:21.696576 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:09:21.696597 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:09:21.696616 | orchestrator | 2025-01-16 15:09:21.696634 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-01-16 15:09:21.696652 | orchestrator | Thursday 16 January 2025 15:08:18 +0000 (0:00:00.466) 0:00:04.917 ****** 2025-01-16 15:09:21.696671 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:09:21.696691 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:09:21.696712 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:09:21.696734 | orchestrator | 2025-01-16 15:09:21.696757 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-01-16 15:09:21.696777 | orchestrator | Thursday 16 January 2025 15:08:18 +0000 (0:00:00.336) 0:00:05.253 ****** 2025-01-16 15:09:21.696796 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:09:21.696816 | orchestrator | 2025-01-16 15:09:21.696836 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-01-16 15:09:21.696856 | orchestrator | Thursday 16 January 2025 15:08:18 +0000 (0:00:00.077) 0:00:05.331 ****** 2025-01-16 15:09:21.696878 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:09:21.696900 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:09:21.696921 | orchestrator | skipping: [testbed-node-2] 2025-01-16 
15:09:21.696943 | orchestrator | 2025-01-16 15:09:21.696965 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-01-16 15:09:21.696987 | orchestrator | Thursday 16 January 2025 15:08:19 +0000 (0:00:00.274) 0:00:05.605 ****** 2025-01-16 15:09:21.697008 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:09:21.697031 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:09:21.697052 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:09:21.697075 | orchestrator | 2025-01-16 15:09:21.697096 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-01-16 15:09:21.697116 | orchestrator | Thursday 16 January 2025 15:08:19 +0000 (0:00:00.289) 0:00:05.895 ****** 2025-01-16 15:09:21.697136 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:09:21.697155 | orchestrator | 2025-01-16 15:09:21.697176 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-01-16 15:09:21.697198 | orchestrator | Thursday 16 January 2025 15:08:19 +0000 (0:00:00.073) 0:00:05.969 ****** 2025-01-16 15:09:21.697219 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:09:21.697241 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:09:21.697263 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:09:21.697309 | orchestrator | 2025-01-16 15:09:21.697331 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-01-16 15:09:21.697353 | orchestrator | Thursday 16 January 2025 15:08:19 +0000 (0:00:00.245) 0:00:06.214 ****** 2025-01-16 15:09:21.697376 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:09:21.697399 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:09:21.697420 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:09:21.697443 | orchestrator | 2025-01-16 15:09:21.697465 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-01-16 15:09:21.697486 | orchestrator | Thursday 16 January 2025 15:08:19 +0000 (0:00:00.189) 0:00:06.404 ****** 2025-01-16 15:09:21.697509 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:09:21.697531 | orchestrator | 2025-01-16 15:09:21.697578 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-01-16 15:09:21.697601 | orchestrator | Thursday 16 January 2025 15:08:20 +0000 (0:00:00.139) 0:00:06.543 ****** 2025-01-16 15:09:21.697623 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:09:21.697646 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:09:21.697668 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:09:21.697690 | orchestrator | 2025-01-16 15:09:21.697712 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-01-16 15:09:21.697734 | orchestrator | Thursday 16 January 2025 15:08:20 +0000 (0:00:00.180) 0:00:06.724 ****** 2025-01-16 15:09:21.697757 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:09:21.697779 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:09:21.697799 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:09:21.697820 | orchestrator | 2025-01-16 15:09:21.697841 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-01-16 15:09:21.697862 | orchestrator | Thursday 16 January 2025 15:08:20 +0000 (0:00:00.331) 0:00:07.055 ****** 2025-01-16 15:09:21.697883 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:09:21.697904 | orchestrator | 
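Each task header in this run carries two durations, e.g. "Thursday 16 January 2025 15:08:19 +0000 (0:00:00.289) 0:00:05.895": the parenthesised value is the time spent on the previous task and the trailing value is the cumulative play time, as printed by the profile_tasks-style timing callback used here. A small sketch for pulling those numbers out of a captured header line; the parsing helper is illustrative and not part of any OSISM tooling.

import re

# Header format seen throughout this run: "(previous task duration) cumulative time".
HEADER_RE = re.compile(r"\((\d+):(\d+):(\d+\.\d+)\)\s+(\d+):(\d+):(\d+\.\d+)")


def to_seconds(h: str, m: str, s: str) -> float:
    return int(h) * 3600 + int(m) * 60 + float(s)


def parse_timings(line: str) -> tuple[float, float] | None:
    """Return (previous_task_seconds, cumulative_seconds) from a task header line."""
    match = HEADER_RE.search(line)
    if not match:
        return None
    return to_seconds(*match.groups()[:3]), to_seconds(*match.groups()[3:])


if __name__ == "__main__":
    sample = "Thursday 16 January 2025 15:08:19 +0000 (0:00:00.289) 0:00:05.895 ******"
    print(parse_timings(sample))  # (0.289, 5.895)
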
2025-01-16 15:09:21.697925 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-01-16 15:09:21.697946 | orchestrator | Thursday 16 January 2025 15:08:20 +0000 (0:00:00.074) 0:00:07.130 ****** 2025-01-16 15:09:21.697966 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:09:21.697986 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:09:21.698006 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:09:21.698085 | orchestrator | 2025-01-16 15:09:21.698107 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-01-16 15:09:21.698127 | orchestrator | Thursday 16 January 2025 15:08:20 +0000 (0:00:00.296) 0:00:07.427 ****** 2025-01-16 15:09:21.698146 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:09:21.698166 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:09:21.698187 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:09:21.698207 | orchestrator | 2025-01-16 15:09:21.698244 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-01-16 15:09:21.698274 | orchestrator | Thursday 16 January 2025 15:08:21 +0000 (0:00:00.369) 0:00:07.797 ****** 2025-01-16 15:09:21.698295 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:09:21.698315 | orchestrator | 2025-01-16 15:09:21.698336 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-01-16 15:09:21.698356 | orchestrator | Thursday 16 January 2025 15:08:21 +0000 (0:00:00.076) 0:00:07.873 ****** 2025-01-16 15:09:21.698377 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:09:21.698396 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:09:21.698415 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:09:21.698435 | orchestrator | 2025-01-16 15:09:21.698454 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-01-16 15:09:21.698474 | orchestrator | Thursday 16 January 2025 15:08:21 +0000 (0:00:00.303) 0:00:08.176 ****** 2025-01-16 15:09:21.698493 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:09:21.698513 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:09:21.698533 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:09:21.698584 | orchestrator | 2025-01-16 15:09:21.698620 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-01-16 15:09:21.698642 | orchestrator | Thursday 16 January 2025 15:08:21 +0000 (0:00:00.252) 0:00:08.429 ****** 2025-01-16 15:09:21.698663 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:09:21.698681 | orchestrator | 2025-01-16 15:09:21.698700 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-01-16 15:09:21.698720 | orchestrator | Thursday 16 January 2025 15:08:22 +0000 (0:00:00.156) 0:00:08.585 ****** 2025-01-16 15:09:21.698739 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:09:21.698758 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:09:21.698779 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:09:21.698799 | orchestrator | 2025-01-16 15:09:21.698819 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-01-16 15:09:21.698839 | orchestrator | Thursday 16 January 2025 15:08:22 +0000 (0:00:00.174) 0:00:08.760 ****** 2025-01-16 15:09:21.698859 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:09:21.698880 | orchestrator | ok: [testbed-node-1] 
2025-01-16 15:09:21.698926 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:09:21.698963 | orchestrator | 2025-01-16 15:09:21.698985 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-01-16 15:09:21.699005 | orchestrator | Thursday 16 January 2025 15:08:22 +0000 (0:00:00.260) 0:00:09.021 ****** 2025-01-16 15:09:21.699025 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:09:21.699044 | orchestrator | 2025-01-16 15:09:21.699064 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-01-16 15:09:21.699083 | orchestrator | Thursday 16 January 2025 15:08:22 +0000 (0:00:00.075) 0:00:09.097 ****** 2025-01-16 15:09:21.699104 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:09:21.699125 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:09:21.699144 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:09:21.699165 | orchestrator | 2025-01-16 15:09:21.699185 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-01-16 15:09:21.699206 | orchestrator | Thursday 16 January 2025 15:08:22 +0000 (0:00:00.248) 0:00:09.345 ****** 2025-01-16 15:09:21.699226 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:09:21.699247 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:09:21.699267 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:09:21.699287 | orchestrator | 2025-01-16 15:09:21.699307 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-01-16 15:09:21.699417 | orchestrator | Thursday 16 January 2025 15:08:23 +0000 (0:00:00.253) 0:00:09.598 ****** 2025-01-16 15:09:21.699443 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:09:21.699462 | orchestrator | 2025-01-16 15:09:21.699482 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-01-16 15:09:21.699502 | orchestrator | Thursday 16 January 2025 15:08:23 +0000 (0:00:00.072) 0:00:09.670 ****** 2025-01-16 15:09:21.699521 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:09:21.699585 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:09:21.699609 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:09:21.699629 | orchestrator | 2025-01-16 15:09:21.699649 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-01-16 15:09:21.699669 | orchestrator | Thursday 16 January 2025 15:08:23 +0000 (0:00:00.250) 0:00:09.920 ****** 2025-01-16 15:09:21.699689 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:09:21.699709 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:09:21.699729 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:09:21.699751 | orchestrator | 2025-01-16 15:09:21.699771 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-01-16 15:09:21.699793 | orchestrator | Thursday 16 January 2025 15:08:23 +0000 (0:00:00.243) 0:00:10.164 ****** 2025-01-16 15:09:21.699812 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:09:21.699833 | orchestrator | 2025-01-16 15:09:21.699853 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-01-16 15:09:21.699874 | orchestrator | Thursday 16 January 2025 15:08:23 +0000 (0:00:00.110) 0:00:10.275 ****** 2025-01-16 15:09:21.699918 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:09:21.699939 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:09:21.699961 | 
orchestrator | skipping: [testbed-node-2] 2025-01-16 15:09:21.699980 | orchestrator | 2025-01-16 15:09:21.700002 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-01-16 15:09:21.700023 | orchestrator | Thursday 16 January 2025 15:08:24 +0000 (0:00:00.482) 0:00:10.757 ****** 2025-01-16 15:09:21.700043 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:09:21.700063 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:09:21.700082 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:09:21.700105 | orchestrator | 2025-01-16 15:09:21.700127 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-01-16 15:09:21.700149 | orchestrator | Thursday 16 January 2025 15:08:24 +0000 (0:00:00.320) 0:00:11.077 ****** 2025-01-16 15:09:21.700171 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:09:21.700193 | orchestrator | 2025-01-16 15:09:21.700213 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-01-16 15:09:21.700248 | orchestrator | Thursday 16 January 2025 15:08:24 +0000 (0:00:00.153) 0:00:11.231 ****** 2025-01-16 15:09:21.700271 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:09:21.700394 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:09:21.700420 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:09:21.700442 | orchestrator | 2025-01-16 15:09:21.700464 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-01-16 15:09:21.700487 | orchestrator | Thursday 16 January 2025 15:08:24 +0000 (0:00:00.193) 0:00:11.424 ****** 2025-01-16 15:09:21.700508 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:09:21.700530 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:09:21.700576 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:09:21.700598 | orchestrator | 2025-01-16 15:09:21.700621 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-01-16 15:09:21.700642 | orchestrator | Thursday 16 January 2025 15:08:26 +0000 (0:00:01.829) 0:00:13.254 ****** 2025-01-16 15:09:21.700664 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-01-16 15:09:21.700687 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-01-16 15:09:21.700708 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-01-16 15:09:21.700742 | orchestrator | 2025-01-16 15:09:21.700765 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-01-16 15:09:21.700787 | orchestrator | Thursday 16 January 2025 15:08:28 +0000 (0:00:01.791) 0:00:15.046 ****** 2025-01-16 15:09:21.700807 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-01-16 15:09:21.700830 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-01-16 15:09:21.700852 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-01-16 15:09:21.700874 | orchestrator | 2025-01-16 15:09:21.700896 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-01-16 15:09:21.700918 | orchestrator | Thursday 16 January 2025 15:08:30 +0000 (0:00:01.701) 0:00:16.747 ****** 2025-01-16 
15:09:21.700940 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-01-16 15:09:21.700961 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-01-16 15:09:21.700989 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-01-16 15:09:21.701010 | orchestrator | 2025-01-16 15:09:21.701029 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-01-16 15:09:21.701050 | orchestrator | Thursday 16 January 2025 15:08:31 +0000 (0:00:01.720) 0:00:18.467 ****** 2025-01-16 15:09:21.701085 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:09:21.701107 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:09:21.701128 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:09:21.701149 | orchestrator | 2025-01-16 15:09:21.701169 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-01-16 15:09:21.701189 | orchestrator | Thursday 16 January 2025 15:08:32 +0000 (0:00:00.279) 0:00:18.747 ****** 2025-01-16 15:09:21.701210 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:09:21.701231 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:09:21.701253 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:09:21.701280 | orchestrator | 2025-01-16 15:09:21.701301 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-01-16 15:09:21.701319 | orchestrator | Thursday 16 January 2025 15:08:32 +0000 (0:00:00.262) 0:00:19.009 ****** 2025-01-16 15:09:21.701337 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:09:21.701359 | orchestrator | 2025-01-16 15:09:21.701381 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-01-16 15:09:21.701403 | orchestrator | Thursday 16 January 2025 15:08:33 +0000 (0:00:00.663) 0:00:19.672 ****** 2025-01-16 15:09:21.701448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-01-16 15:09:21.701474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-01-16 15:09:21.701522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 
'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-01-16 15:09:21.701572 | orchestrator | 2025-01-16 15:09:21.701596 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-01-16 15:09:21.701618 | orchestrator | Thursday 16 January 2025 15:08:34 +0000 (0:00:01.596) 0:00:21.269 ****** 2025-01-16 15:09:21.701642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-01-16 15:09:21.701677 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:09:21.701715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-01-16 15:09:21.701750 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:09:21.701774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 
'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-01-16 15:09:21.701799 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:09:21.701822 | orchestrator | 2025-01-16 15:09:21.701844 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-01-16 15:09:21.701866 | orchestrator | Thursday 16 January 2025 15:08:35 +0000 (0:00:01.022) 0:00:22.291 ****** 2025-01-16 15:09:21.701903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-01-16 15:09:21.701939 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:09:21.701972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-01-16 15:09:21.701997 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:09:21.702064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 
'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-01-16 15:09:21.702106 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:09:21.702130 | orchestrator | 2025-01-16 15:09:21.702152 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-01-16 15:09:21.702174 | orchestrator | Thursday 16 January 2025 15:08:36 +0000 (0:00:01.192) 0:00:23.484 ****** 2025-01-16 15:09:21.702210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-01-16 15:09:21.702236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-01-16 15:09:21.702289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-01-16 15:09:21.702325 | orchestrator | 2025-01-16 15:09:21.702347 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-01-16 15:09:21.702370 | orchestrator | Thursday 16 January 2025 15:08:41 +0000 (0:00:04.439) 0:00:27.924 ****** 2025-01-16 15:09:21.702393 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:09:21.702415 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:09:21.702438 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:09:21.702461 | orchestrator | 2025-01-16 15:09:21.702484 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-01-16 15:09:21.702507 | orchestrator | Thursday 16 January 2025 15:08:41 +0000 (0:00:00.370) 0:00:28.294 ****** 2025-01-16 15:09:21.702529 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:09:21.702575 | orchestrator | 2025-01-16 15:09:21.702599 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-01-16 15:09:21.702622 | orchestrator | Thursday 16 January 2025 15:08:42 +0000 (0:00:00.732) 0:00:29.026 ****** 2025-01-16 15:09:21.702645 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:09:21.702667 | orchestrator | 2025-01-16 15:09:21.702689 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-01-16 15:09:21.702712 | orchestrator | Thursday 16 January 2025 15:08:44 +0000 (0:00:01.697) 0:00:30.724 ****** 2025-01-16 15:09:21.702734 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:09:21.702757 | orchestrator | 2025-01-16 15:09:21.702780 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-01-16 15:09:21.702810 | orchestrator | Thursday 16 January 2025 15:08:45 +0000 (0:00:01.663) 0:00:32.387 ****** 2025-01-16 15:09:21.702833 | orchestrator | changed: [testbed-node-0] 2025-01-16 
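The item=... payloads logged by the "Deploy horizon container" task above are kolla-ansible service definitions for Horizon: container name, image, environment toggles, volumes, a Docker healthcheck, and the haproxy frontends that expose the dashboard internally and externally. The helper below is illustrative only (it is not kolla-ansible code); the horizon_item dict is a trimmed copy of the definition logged for testbed-node-0, so the image, healthcheck probe and frontend names are taken from the log.

```python
# Illustrative helper only (not kolla-ansible code): condense a logged
# service definition like the "Deploy horizon container" items above.
def summarize_service(item: dict) -> str:
    svc = item["value"]
    lines = [
        f"container   : {svc['container_name']}",
        f"image       : {svc['image']}",
        f"healthcheck : {' '.join(svc['healthcheck']['test'])}",
    ]
    for name, frontend in svc.get("haproxy", {}).items():
        if frontend.get("enabled"):
            lines.append(
                f"haproxy     : {name} mode={frontend.get('mode')} "
                f"port={frontend.get('port')} listen_port={frontend.get('listen_port')}"
            )
    return "\n".join(lines)


# Trimmed-down copy of the definition logged for testbed-node-0.
horizon_item = {
    "key": "horizon",
    "value": {
        "container_name": "horizon",
        "image": "nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1",
        "healthcheck": {"test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:80"]},
        "haproxy": {
            "horizon": {"enabled": True, "mode": "http", "port": "443", "listen_port": "80"},
            "horizon_redirect": {"enabled": True, "mode": "redirect", "port": "80", "listen_port": "80"},
        },
    },
}

print(summarize_service(horizon_item))
```

Run as-is it prints the image tag, the healthcheck_curl probe and the two frontends included in the trimmed dict; the full logged item additionally enables horizon_external, horizon_external_redirect and the acme_client entry.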
15:09:21.702855 | orchestrator | 2025-01-16 15:09:21.702878 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-01-16 15:09:21.702900 | orchestrator | Thursday 16 January 2025 15:08:53 +0000 (0:00:07.166) 0:00:39.553 ****** 2025-01-16 15:09:21.702923 | orchestrator | 2025-01-16 15:09:21.702945 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-01-16 15:09:21.702968 | orchestrator | Thursday 16 January 2025 15:08:53 +0000 (0:00:00.041) 0:00:39.595 ****** 2025-01-16 15:09:21.702990 | orchestrator | 2025-01-16 15:09:21.703013 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-01-16 15:09:21.703035 | orchestrator | Thursday 16 January 2025 15:08:53 +0000 (0:00:00.037) 0:00:39.633 ****** 2025-01-16 15:09:21.703058 | orchestrator | 2025-01-16 15:09:21.703080 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-01-16 15:09:21.703103 | orchestrator | Thursday 16 January 2025 15:08:53 +0000 (0:00:00.109) 0:00:39.743 ****** 2025-01-16 15:09:21.703125 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:09:21.703148 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:09:21.703170 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:09:21.703193 | orchestrator | 2025-01-16 15:09:21.703215 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 15:09:21.703239 | orchestrator | testbed-node-0 : ok=41  changed=11  unreachable=0 failed=0 skipped=29  rescued=0 ignored=0 2025-01-16 15:09:21.703261 | orchestrator | testbed-node-1 : ok=38  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-01-16 15:09:21.703282 | orchestrator | testbed-node-2 : ok=38  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-01-16 15:09:21.703302 | orchestrator | 2025-01-16 15:09:21.703320 | orchestrator | 2025-01-16 15:09:21.703338 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 15:09:21.703357 | orchestrator | Thursday 16 January 2025 15:09:18 +0000 (0:00:25.495) 0:01:05.238 ****** 2025-01-16 15:09:21.703387 | orchestrator | =============================================================================== 2025-01-16 15:09:21.703408 | orchestrator | horizon : Restart horizon container ------------------------------------ 25.50s 2025-01-16 15:09:21.703430 | orchestrator | horizon : Running Horizon bootstrap container --------------------------- 7.17s 2025-01-16 15:09:21.703452 | orchestrator | horizon : Deploy horizon container -------------------------------------- 4.44s 2025-01-16 15:09:21.703475 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.83s 2025-01-16 15:09:21.703498 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.79s 2025-01-16 15:09:21.703520 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.72s 2025-01-16 15:09:21.703623 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.70s 2025-01-16 15:09:24.708833 | orchestrator | horizon : Creating Horizon database ------------------------------------- 1.70s 2025-01-16 15:09:24.709002 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 1.66s 2025-01-16 15:09:24.709023 | orchestrator | service-cert-copy : 
horizon | Copying over extra CA certificates -------- 1.60s 2025-01-16 15:09:24.709038 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.36s 2025-01-16 15:09:24.709053 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.19s 2025-01-16 15:09:24.709067 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 1.02s 2025-01-16 15:09:24.709082 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.73s 2025-01-16 15:09:24.709096 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.66s 2025-01-16 15:09:24.709110 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.66s 2025-01-16 15:09:24.709124 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.48s 2025-01-16 15:09:24.709138 | orchestrator | horizon : Update policy file name --------------------------------------- 0.47s 2025-01-16 15:09:24.709153 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.47s 2025-01-16 15:09:24.709187 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.44s 2025-01-16 15:09:24.709202 | orchestrator | 2025-01-16 15:09:21 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:09:24.709236 | orchestrator | 2025-01-16 15:09:24 | INFO  | Task c003f5ec-2aa2-4397-9f74-def75e1af845 is in state STARTED 2025-01-16 15:09:27.730676 | orchestrator | 2025-01-16 15:09:24 | INFO  | Task be7674ca-28b0-439d-bfbd-76aef9632369 is in state STARTED 2025-01-16 15:09:27.730807 | orchestrator | 2025-01-16 15:09:24 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:09:27.730846 | orchestrator | 2025-01-16 15:09:27 | INFO  | Task c003f5ec-2aa2-4397-9f74-def75e1af845 is in state STARTED 2025-01-16 15:09:30.750626 | orchestrator | 2025-01-16 15:09:27 | INFO  | Task be7674ca-28b0-439d-bfbd-76aef9632369 is in state STARTED 2025-01-16 15:09:30.750749 | orchestrator | 2025-01-16 15:09:27 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:09:30.750920 | orchestrator | 2025-01-16 15:09:30 | INFO  | Task c003f5ec-2aa2-4397-9f74-def75e1af845 is in state STARTED 2025-01-16 15:09:33.771714 | orchestrator | 2025-01-16 15:09:30 | INFO  | Task be7674ca-28b0-439d-bfbd-76aef9632369 is in state STARTED 2025-01-16 15:09:33.771850 | orchestrator | 2025-01-16 15:09:30 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:09:33.771889 | orchestrator | 2025-01-16 15:09:33 | INFO  | Task c003f5ec-2aa2-4397-9f74-def75e1af845 is in state STARTED 2025-01-16 15:09:36.789342 | orchestrator | 2025-01-16 15:09:33 | INFO  | Task be7674ca-28b0-439d-bfbd-76aef9632369 is in state STARTED 2025-01-16 15:09:36.789473 | orchestrator | 2025-01-16 15:09:33 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:09:36.789598 | orchestrator | 2025-01-16 15:09:36 | INFO  | Task c003f5ec-2aa2-4397-9f74-def75e1af845 is in state STARTED 2025-01-16 15:09:39.806854 | orchestrator | 2025-01-16 15:09:36 | INFO  | Task be7674ca-28b0-439d-bfbd-76aef9632369 is in state STARTED 2025-01-16 15:09:39.806940 | orchestrator | 2025-01-16 15:09:36 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:09:39.806960 | orchestrator | 2025-01-16 15:09:39 | INFO  | Task c003f5ec-2aa2-4397-9f74-def75e1af845 is in state STARTED 2025-01-16 15:09:39.807028 | orchestrator | 2025-01-16 15:09:39 | INFO  | Task 
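The interleaved INFO lines come from the deployment wrapper on the orchestrator, which re-checks the two task IDs (c003f5ec-... and be7674ca-...) every second and reports their state until they leave STARTED. Below is a minimal sketch of that polling pattern; wait_for_tasks and fetch_state are hypothetical names, not the actual osism client API.

```python
# Minimal sketch of the wait loop implied by the "Task <id> is in state
# STARTED" / "Wait 1 second(s) until the next check" messages above.
# fetch_state() is a hypothetical stand-in for the real client call.
import time

def wait_for_tasks(task_ids, fetch_state, interval=1, timeout=3600):
    pending = set(task_ids)
    deadline = time.monotonic() + timeout
    while pending and time.monotonic() < deadline:
        for task_id in sorted(pending):
            state = fetch_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":          # the real tool may distinguish more states
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)
    return not pending  # True once every task has left STARTED
```

In the portion of the log shown here both tasks remain in STARTED, so only the repeated one-second waits are visible.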
be7674ca-28b0-439d-bfbd-76aef9632369 is in state STARTED 2025-01-16 15:09:42.825670 | orchestrator | 2025-01-16 15:09:39 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:09:42.825782 | orchestrator | 2025-01-16 15:09:42 | INFO  | Task c003f5ec-2aa2-4397-9f74-def75e1af845 is in state STARTED 2025-01-16 15:09:45.848566 | orchestrator | 2025-01-16 15:09:42 | INFO  | Task be7674ca-28b0-439d-bfbd-76aef9632369 is in state STARTED 2025-01-16 15:09:45.848689 | orchestrator | 2025-01-16 15:09:42 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:09:45.848712 | orchestrator | 2025-01-16 15:09:45.848719 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-01-16 15:09:45.848726 | orchestrator | 2025-01-16 15:09:45.848782 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-01-16 15:09:45.848791 | orchestrator | 2025-01-16 15:09:45.848797 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-01-16 15:09:45.848804 | orchestrator | Thursday 16 January 2025 15:08:16 +0000 (0:00:00.745) 0:00:00.745 ****** 2025-01-16 15:09:45.848811 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 15:09:45.848830 | orchestrator | 2025-01-16 15:09:45.848836 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-01-16 15:09:45.848843 | orchestrator | Thursday 16 January 2025 15:08:17 +0000 (0:00:00.402) 0:00:01.147 ****** 2025-01-16 15:09:45.848849 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-0) 2025-01-16 15:09:45.848855 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-1) 2025-01-16 15:09:45.848861 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-2) 2025-01-16 15:09:45.848867 | orchestrator | 2025-01-16 15:09:45.848873 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-01-16 15:09:45.848879 | orchestrator | Thursday 16 January 2025 15:08:17 +0000 (0:00:00.600) 0:00:01.748 ****** 2025-01-16 15:09:45.848885 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 15:09:45.848891 | orchestrator | 2025-01-16 15:09:45.848897 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-01-16 15:09:45.848903 | orchestrator | Thursday 16 January 2025 15:08:18 +0000 (0:00:00.477) 0:00:02.225 ****** 2025-01-16 15:09:45.848909 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:09:45.848916 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:09:45.848922 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:09:45.848928 | orchestrator | 2025-01-16 15:09:45.848934 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-01-16 15:09:45.848940 | orchestrator | Thursday 16 January 2025 15:08:18 +0000 (0:00:00.535) 0:00:02.761 ****** 2025-01-16 15:09:45.848946 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:09:45.848952 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:09:45.848958 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:09:45.848964 | orchestrator | 2025-01-16 15:09:45.848970 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-01-16 15:09:45.848976 | orchestrator | Thursday 16 January 2025 
15:08:19 +0000 (0:00:00.234) 0:00:02.995 ****** 2025-01-16 15:09:45.849155 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:09:45.849164 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:09:45.849170 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:09:45.849176 | orchestrator | 2025-01-16 15:09:45.849182 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-01-16 15:09:45.849188 | orchestrator | Thursday 16 January 2025 15:08:19 +0000 (0:00:00.578) 0:00:03.574 ****** 2025-01-16 15:09:45.849194 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:09:45.849200 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:09:45.849206 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:09:45.849211 | orchestrator | 2025-01-16 15:09:45.849217 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-01-16 15:09:45.849223 | orchestrator | Thursday 16 January 2025 15:08:19 +0000 (0:00:00.207) 0:00:03.782 ****** 2025-01-16 15:09:45.849229 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:09:45.849235 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:09:45.849241 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:09:45.849247 | orchestrator | 2025-01-16 15:09:45.849252 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-01-16 15:09:45.849258 | orchestrator | Thursday 16 January 2025 15:08:20 +0000 (0:00:00.195) 0:00:03.977 ****** 2025-01-16 15:09:45.849264 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:09:45.849270 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:09:45.849276 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:09:45.849282 | orchestrator | 2025-01-16 15:09:45.849287 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-01-16 15:09:45.849294 | orchestrator | Thursday 16 January 2025 15:08:20 +0000 (0:00:00.206) 0:00:04.184 ****** 2025-01-16 15:09:45.849300 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:09:45.849306 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:09:45.849312 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:09:45.849318 | orchestrator | 2025-01-16 15:09:45.849324 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-01-16 15:09:45.849330 | orchestrator | Thursday 16 January 2025 15:08:20 +0000 (0:00:00.393) 0:00:04.578 ****** 2025-01-16 15:09:45.849336 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:09:45.849342 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:09:45.849348 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:09:45.849354 | orchestrator | 2025-01-16 15:09:45.849360 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-01-16 15:09:45.849366 | orchestrator | Thursday 16 January 2025 15:08:20 +0000 (0:00:00.202) 0:00:04.780 ****** 2025-01-16 15:09:45.849372 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-01-16 15:09:45.849378 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-01-16 15:09:45.849384 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-01-16 15:09:45.849390 | orchestrator | 2025-01-16 15:09:45.849396 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-01-16 15:09:45.849406 | orchestrator | Thursday 16 
January 2025 15:08:21 +0000 (0:00:00.503) 0:00:05.283 ****** 2025-01-16 15:09:45.849413 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:09:45.849419 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:09:45.849437 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:09:45.849443 | orchestrator | 2025-01-16 15:09:45.849449 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-01-16 15:09:45.849455 | orchestrator | Thursday 16 January 2025 15:08:21 +0000 (0:00:00.293) 0:00:05.576 ****** 2025-01-16 15:09:45.849466 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-01-16 15:09:45.849473 | orchestrator | changed: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-01-16 15:09:45.849479 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-01-16 15:09:45.849485 | orchestrator | 2025-01-16 15:09:45.849491 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-01-16 15:09:45.849502 | orchestrator | Thursday 16 January 2025 15:08:23 +0000 (0:00:01.399) 0:00:06.976 ****** 2025-01-16 15:09:45.849508 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-01-16 15:09:45.849514 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-01-16 15:09:45.849520 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-01-16 15:09:45.849526 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:09:45.849550 | orchestrator | 2025-01-16 15:09:45.849556 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-01-16 15:09:45.849563 | orchestrator | Thursday 16 January 2025 15:08:23 +0000 (0:00:00.499) 0:00:07.476 ****** 2025-01-16 15:09:45.849573 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-01-16 15:09:45.849584 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-01-16 15:09:45.849590 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-01-16 15:09:45.849596 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:09:45.849603 | orchestrator | 2025-01-16 15:09:45.849609 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-01-16 15:09:45.849615 | orchestrator | Thursday 16 January 2025 15:08:23 +0000 (0:00:00.458) 0:00:07.934 ****** 2025-01-16 15:09:45.849624 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-01-16 15:09:45.849633 | orchestrator | 
skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-01-16 15:09:45.849640 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-01-16 15:09:45.849646 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:09:45.849652 | orchestrator | 2025-01-16 15:09:45.849658 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-01-16 15:09:45.849664 | orchestrator | Thursday 16 January 2025 15:08:24 +0000 (0:00:00.121) 0:00:08.056 ****** 2025-01-16 15:09:45.849672 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '439665be0bb2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-01-16 15:08:22.222701', 'end': '2025-01-16 15:08:22.241382', 'delta': '0:00:00.018681', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['439665be0bb2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-01-16 15:09:45.849692 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '72057891a3d7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-01-16 15:08:22.574243', 'end': '2025-01-16 15:08:22.594950', 'delta': '0:00:00.020707', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['72057891a3d7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-01-16 15:09:45.849700 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': 'd38109367755', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-01-16 15:08:22.914036', 'end': '2025-01-16 15:08:22.933025', 'delta': '0:00:00.018989', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d38109367755'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-01-16 15:09:45.849706 | orchestrator | 
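The registered results above show the exact probe the ceph-facts role issued on each mon host, docker ps -q --filter name=ceph-mon-<hostname>, and the container IDs it captured (439665be0bb2, 72057891a3d7, d38109367755). The snippet below simply re-issues that command from Python as a standalone check; it is an illustration, not code from the role.

```python
# Re-run the lookup shown in the registered results above: a non-empty
# stdout from "docker ps -q --filter name=ceph-mon-<host>" means a mon
# container is running for that host.
import subprocess

def running_mon_container(hostname):
    result = subprocess.run(
        ["docker", "ps", "-q", "--filter", f"name=ceph-mon-{hostname}"],
        capture_output=True, text=True, check=False,
    )
    return result.stdout.strip() or None

for host in ("testbed-node-0", "testbed-node-1", "testbed-node-2"):
    print(host, running_mon_container(host))
```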
2025-01-16 15:09:45.849713 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-01-16 15:09:45.849719 | orchestrator | Thursday 16 January 2025 15:08:24 +0000 (0:00:00.149) 0:00:08.206 ****** 2025-01-16 15:09:45.849725 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:09:45.849731 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:09:45.849737 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:09:45.849743 | orchestrator | 2025-01-16 15:09:45.849748 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-01-16 15:09:45.849754 | orchestrator | Thursday 16 January 2025 15:08:24 +0000 (0:00:00.353) 0:00:08.560 ****** 2025-01-16 15:09:45.849760 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-01-16 15:09:45.849766 | orchestrator | 2025-01-16 15:09:45.849773 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-01-16 15:09:45.849779 | orchestrator | Thursday 16 January 2025 15:08:25 +0000 (0:00:00.887) 0:00:09.447 ****** 2025-01-16 15:09:45.849786 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:09:45.849792 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:09:45.849799 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:09:45.849809 | orchestrator | 2025-01-16 15:09:45.849816 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-01-16 15:09:45.849822 | orchestrator | Thursday 16 January 2025 15:08:25 +0000 (0:00:00.316) 0:00:09.764 ****** 2025-01-16 15:09:45.849829 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:09:45.849835 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:09:45.849842 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:09:45.849848 | orchestrator | 2025-01-16 15:09:45.849855 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-01-16 15:09:45.849861 | orchestrator | Thursday 16 January 2025 15:08:26 +0000 (0:00:00.344) 0:00:10.108 ****** 2025-01-16 15:09:45.849868 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:09:45.849874 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:09:45.849881 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:09:45.849887 | orchestrator | 2025-01-16 15:09:45.849894 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-01-16 15:09:45.849901 | orchestrator | Thursday 16 January 2025 15:08:26 +0000 (0:00:00.208) 0:00:10.317 ****** 2025-01-16 15:09:45.849911 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:09:45.849917 | orchestrator | 2025-01-16 15:09:45.849924 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-01-16 15:09:45.849931 | orchestrator | Thursday 16 January 2025 15:08:26 +0000 (0:00:00.092) 0:00:10.409 ****** 2025-01-16 15:09:45.849938 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:09:45.849945 | orchestrator | 2025-01-16 15:09:45.849951 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-01-16 15:09:45.849958 | orchestrator | Thursday 16 January 2025 15:08:26 +0000 (0:00:00.169) 0:00:10.579 ****** 2025-01-16 15:09:45.849965 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:09:45.849971 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:09:45.849978 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:09:45.849985 | 
orchestrator | 2025-01-16 15:09:45.849991 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-01-16 15:09:45.849998 | orchestrator | Thursday 16 January 2025 15:08:26 +0000 (0:00:00.240) 0:00:10.819 ****** 2025-01-16 15:09:45.850004 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:09:45.850011 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:09:45.850056 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:09:45.850063 | orchestrator | 2025-01-16 15:09:45.850073 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-01-16 15:09:45.850079 | orchestrator | Thursday 16 January 2025 15:08:27 +0000 (0:00:00.366) 0:00:11.186 ****** 2025-01-16 15:09:45.850086 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:09:45.850092 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:09:45.850099 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:09:45.850106 | orchestrator | 2025-01-16 15:09:45.850113 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-01-16 15:09:45.850120 | orchestrator | Thursday 16 January 2025 15:08:27 +0000 (0:00:00.243) 0:00:11.429 ****** 2025-01-16 15:09:45.850126 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:09:45.850133 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:09:45.850143 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:09:45.850150 | orchestrator | 2025-01-16 15:09:45.850157 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-01-16 15:09:45.850163 | orchestrator | Thursday 16 January 2025 15:08:27 +0000 (0:00:00.257) 0:00:11.686 ****** 2025-01-16 15:09:45.850169 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:09:45.850175 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:09:45.850181 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:09:45.850187 | orchestrator | 2025-01-16 15:09:45.850193 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-01-16 15:09:45.850199 | orchestrator | Thursday 16 January 2025 15:08:27 +0000 (0:00:00.234) 0:00:11.921 ****** 2025-01-16 15:09:45.850205 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:09:45.850210 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:09:45.850216 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:09:45.850222 | orchestrator | 2025-01-16 15:09:45.850228 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-01-16 15:09:45.850234 | orchestrator | Thursday 16 January 2025 15:08:28 +0000 (0:00:00.396) 0:00:12.317 ****** 2025-01-16 15:09:45.850240 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:09:45.850246 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:09:45.850252 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:09:45.850258 | orchestrator | 2025-01-16 15:09:45.850264 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-01-16 15:09:45.850269 | orchestrator | Thursday 16 January 2025 15:08:28 +0000 (0:00:00.241) 0:00:12.558 ****** 2025-01-16 15:09:45.850276 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--53488163--bd74--50cc--bfa0--f1a94ed01f33-osd--block--53488163--bd74--50cc--bfa0--f1a94ed01f33', 
'dm-uuid-LVM-N0DQnPOx7vvMZ9gWckNqcQrXVN0ofw1Usc1jR19jN1dhrkIszLuXtjetQJiA4xdI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-01-16 15:09:45.850292 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--562c7eeb--0cc2--5747--a030--082dcf3dd7cc-osd--block--562c7eeb--0cc2--5747--a030--082dcf3dd7cc', 'dm-uuid-LVM-TMfc5lZ2pMOOsxqCan5tJSpOeCg5GjY2kDWg0LqkFwPvxesaptTE5VSNzRCW2Kxy'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-01-16 15:09:45.850298 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:09:45.850304 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:09:45.850311 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:09:45.850321 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d9c27d09--d80a--5255--9afb--1d5e2e5f2f02-osd--block--d9c27d09--d80a--5255--9afb--1d5e2e5f2f02', 'dm-uuid-LVM-52fmO9JMV2PHItuTk1y42oGRchjxoLr0T1j2CunOsF0BHFet4TO5M5WcuEoiy6B0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-01-16 15:09:45.850327 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:09:45.850334 | orchestrator | 
skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9e6463fb--b573--5867--8a5d--b884b3259bdd-osd--block--9e6463fb--b573--5867--8a5d--b884b3259bdd', 'dm-uuid-LVM-VYijzicZ0f6Xa169M8PRLFLtTHFmowc6HmGHoyV9rgPdZjl2PGHRQRDdzdzbYVju'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-01-16 15:09:45.850344 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:09:45.850350 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:09:45.850362 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:09:45.850368 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:09:45.850374 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:09:45.850380 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:09:45.850386 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:09:45.850397 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:09:45.850403 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:09:45.850414 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6fd23866-075a-4a28-b944-e328afaaaf4b', 'scsi-SQEMU_QEMU_HARDDISK_6fd23866-075a-4a28-b944-e328afaaaf4b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6fd23866-075a-4a28-b944-e328afaaaf4b-part1', 'scsi-SQEMU_QEMU_HARDDISK_6fd23866-075a-4a28-b944-e328afaaaf4b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6fd23866-075a-4a28-b944-e328afaaaf4b-part14', 'scsi-SQEMU_QEMU_HARDDISK_6fd23866-075a-4a28-b944-e328afaaaf4b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6fd23866-075a-4a28-b944-e328afaaaf4b-part15', 'scsi-SQEMU_QEMU_HARDDISK_6fd23866-075a-4a28-b944-e328afaaaf4b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6fd23866-075a-4a28-b944-e328afaaaf4b-part16', 'scsi-SQEMU_QEMU_HARDDISK_6fd23866-075a-4a28-b944-e328afaaaf4b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:09:45.850421 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:09:45.850429 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--53488163--bd74--50cc--bfa0--f1a94ed01f33-osd--block--53488163--bd74--50cc--bfa0--f1a94ed01f33'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2teqzv-jqXh-sIIm-pvj1-H7Ld-gSnN-nKf75c', 'scsi-0QEMU_QEMU_HARDDISK_a3fa75ed-12ad-4d98-b1e3-06058efbf95a', 'scsi-SQEMU_QEMU_HARDDISK_a3fa75ed-12ad-4d98-b1e3-06058efbf95a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:09:45.850441 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--562c7eeb--0cc2--5747--a030--082dcf3dd7cc-osd--block--562c7eeb--0cc2--5747--a030--082dcf3dd7cc'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0InOC3-vNqk-jV0S-t0JO-vnVc-lLEY-QjyWks', 'scsi-0QEMU_QEMU_HARDDISK_0646438b-3566-4bd7-ac9f-c7444a60ff3f', 'scsi-SQEMU_QEMU_HARDDISK_0646438b-3566-4bd7-ac9f-c7444a60ff3f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:09:45.850450 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:09:45.850457 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_72b30f3d-ea4f-4fbe-a722-d77662b0ee19', 'scsi-SQEMU_QEMU_HARDDISK_72b30f3d-ea4f-4fbe-a722-d77662b0ee19'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:09:45.850467 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:09:45.850478 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_899cd4fc-0b48-4c8f-9f9b-d306ab958cdd', 'scsi-SQEMU_QEMU_HARDDISK_899cd4fc-0b48-4c8f-9f9b-d306ab958cdd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_899cd4fc-0b48-4c8f-9f9b-d306ab958cdd-part1', 'scsi-SQEMU_QEMU_HARDDISK_899cd4fc-0b48-4c8f-9f9b-d306ab958cdd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_899cd4fc-0b48-4c8f-9f9b-d306ab958cdd-part14', 'scsi-SQEMU_QEMU_HARDDISK_899cd4fc-0b48-4c8f-9f9b-d306ab958cdd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_899cd4fc-0b48-4c8f-9f9b-d306ab958cdd-part15', 'scsi-SQEMU_QEMU_HARDDISK_899cd4fc-0b48-4c8f-9f9b-d306ab958cdd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_899cd4fc-0b48-4c8f-9f9b-d306ab958cdd-part16', 'scsi-SQEMU_QEMU_HARDDISK_899cd4fc-0b48-4c8f-9f9b-d306ab958cdd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:09:45.850486 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-01-16-14-28-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:09:45.850496 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--d9c27d09--d80a--5255--9afb--1d5e2e5f2f02-osd--block--d9c27d09--d80a--5255--9afb--1d5e2e5f2f02'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0Exx1e-SsRy-1z32-lkak-o2Hz-xAbz-hZptiH', 'scsi-0QEMU_QEMU_HARDDISK_d1e8c7e9-38c3-4780-8ab7-178f632f9eb8', 'scsi-SQEMU_QEMU_HARDDISK_d1e8c7e9-38c3-4780-8ab7-178f632f9eb8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:09:45.850502 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:09:45.850508 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--9e6463fb--b573--5867--8a5d--b884b3259bdd-osd--block--9e6463fb--b573--5867--8a5d--b884b3259bdd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-32OxvT-YHAF-QogR-Ot0d-6QSn-yngZ-VRUPce', 'scsi-0QEMU_QEMU_HARDDISK_511497a6-ce11-47ca-8c02-acccaddecbc9', 'scsi-SQEMU_QEMU_HARDDISK_511497a6-ce11-47ca-8c02-acccaddecbc9'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:09:45.850514 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f7bd705e-b5e0-4446-bf55-1dfa4188ee04', 'scsi-SQEMU_QEMU_HARDDISK_f7bd705e-b5e0-4446-bf55-1dfa4188ee04'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:09:45.850521 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-01-16-14-28-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:09:45.850527 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:09:45.850595 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--53007ac5--07c2--53cd--add6--e57729925218-osd--block--53007ac5--07c2--53cd--add6--e57729925218', 'dm-uuid-LVM-oQCheSm9KrUUJMm82iuOynV8eiWIUV1TGYoi3INQOEORaMRkjHi2UpRkGgiaqKDU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': 2025-01-16 15:09:45 | INFO  | Task c003f5ec-2aa2-4397-9f74-def75e1af845 is in state STARTED 2025-01-16 15:09:45.850602 | orchestrator | 2025-01-16 15:09:45 | INFO  | Task be7674ca-28b0-439d-bfbd-76aef9632369 is in state SUCCESS 2025-01-16 15:09:45.850608 | orchestrator | 2025-01-16 15:09:45 | INFO 
 | Wait 1 second(s) until the next check 2025-01-16 15:09:45.850619 | orchestrator | None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-01-16 15:09:45.850626 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--54c8019f--0033--5b40--9c4f--7f2e43f78b89-osd--block--54c8019f--0033--5b40--9c4f--7f2e43f78b89', 'dm-uuid-LVM-yqDYE7gr3xSGUfJQD2Za48Kd0b3UBZB6F1ZEjw2Yw9onm42m2LOC3Xclx5lIdmVf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-01-16 15:09:45.850633 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:09:45.850639 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:09:45.850645 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:09:45.850651 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:09:45.850657 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:09:45.850663 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:09:45.850669 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:09:45.850679 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:09:45.850691 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5f86695f-209f-4777-b6ae-0a791cf41e6e', 'scsi-SQEMU_QEMU_HARDDISK_5f86695f-209f-4777-b6ae-0a791cf41e6e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5f86695f-209f-4777-b6ae-0a791cf41e6e-part1', 'scsi-SQEMU_QEMU_HARDDISK_5f86695f-209f-4777-b6ae-0a791cf41e6e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5f86695f-209f-4777-b6ae-0a791cf41e6e-part14', 'scsi-SQEMU_QEMU_HARDDISK_5f86695f-209f-4777-b6ae-0a791cf41e6e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5f86695f-209f-4777-b6ae-0a791cf41e6e-part15', 'scsi-SQEMU_QEMU_HARDDISK_5f86695f-209f-4777-b6ae-0a791cf41e6e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5f86695f-209f-4777-b6ae-0a791cf41e6e-part16', 'scsi-SQEMU_QEMU_HARDDISK_5f86695f-209f-4777-b6ae-0a791cf41e6e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:09:45.850698 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--53007ac5--07c2--53cd--add6--e57729925218-osd--block--53007ac5--07c2--53cd--add6--e57729925218'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Pnrdw0-BkoA-7kIZ-iZct-FPdw-sR1f-noqHVh', 'scsi-0QEMU_QEMU_HARDDISK_0aac5059-2a3a-4141-840f-fb09a7465e72', 'scsi-SQEMU_QEMU_HARDDISK_0aac5059-2a3a-4141-840f-fb09a7465e72'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:09:45.850704 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--54c8019f--0033--5b40--9c4f--7f2e43f78b89-osd--block--54c8019f--0033--5b40--9c4f--7f2e43f78b89'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-31xuQ3-lGqv-j3fr-wAiK-iSrf-pEcD-SUKWpi', 'scsi-0QEMU_QEMU_HARDDISK_97685de2-31d7-40a6-8026-91294c9f6af1', 'scsi-SQEMU_QEMU_HARDDISK_97685de2-31d7-40a6-8026-91294c9f6af1'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:09:45.850717 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d740be6b-1b5d-4ad1-85aa-7275c0983c2d', 'scsi-SQEMU_QEMU_HARDDISK_d740be6b-1b5d-4ad1-85aa-7275c0983c2d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:09:45.850723 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-01-16-14-28-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:09:45.850730 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:09:45.850736 | orchestrator | 2025-01-16 15:09:45.850742 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-01-16 15:09:45.850748 | orchestrator | Thursday 16 January 2025 15:08:29 +0000 (0:00:00.564) 0:00:13.123 ****** 2025-01-16 15:09:45.850754 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-01-16 15:09:45.850760 | orchestrator | 2025-01-16 15:09:45.850766 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-01-16 15:09:45.850772 | orchestrator | Thursday 16 January 2025 15:08:29 +0000 (0:00:00.829) 0:00:13.952 ****** 2025-01-16 15:09:45.850778 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:09:45.850784 | orchestrator | 2025-01-16 15:09:45.850790 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] 
************************************** 2025-01-16 15:09:45.850796 | orchestrator | Thursday 16 January 2025 15:08:30 +0000 (0:00:00.099) 0:00:14.051 ****** 2025-01-16 15:09:45.850802 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:09:45.850808 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:09:45.850814 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:09:45.850820 | orchestrator | 2025-01-16 15:09:45.850826 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-01-16 15:09:45.850832 | orchestrator | Thursday 16 January 2025 15:08:30 +0000 (0:00:00.408) 0:00:14.460 ****** 2025-01-16 15:09:45.850838 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:09:45.850844 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:09:45.850850 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:09:45.850855 | orchestrator | 2025-01-16 15:09:45.850861 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-01-16 15:09:45.850867 | orchestrator | Thursday 16 January 2025 15:08:30 +0000 (0:00:00.499) 0:00:14.960 ****** 2025-01-16 15:09:45.850873 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:09:45.850879 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:09:45.850885 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:09:45.850891 | orchestrator | 2025-01-16 15:09:45.850897 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-01-16 15:09:45.850903 | orchestrator | Thursday 16 January 2025 15:08:31 +0000 (0:00:00.274) 0:00:15.234 ****** 2025-01-16 15:09:45.850909 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:09:45.850915 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:09:45.850921 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:09:45.850927 | orchestrator | 2025-01-16 15:09:45.850933 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-01-16 15:09:45.850939 | orchestrator | Thursday 16 January 2025 15:08:31 +0000 (0:00:00.430) 0:00:15.665 ****** 2025-01-16 15:09:45.850945 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:09:45.850954 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:09:45.850960 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:09:45.850966 | orchestrator | 2025-01-16 15:09:45.850972 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-01-16 15:09:45.850978 | orchestrator | Thursday 16 January 2025 15:08:32 +0000 (0:00:00.306) 0:00:15.972 ****** 2025-01-16 15:09:45.850984 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:09:45.850990 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:09:45.850996 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:09:45.851002 | orchestrator | 2025-01-16 15:09:45.851008 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-01-16 15:09:45.851014 | orchestrator | Thursday 16 January 2025 15:08:32 +0000 (0:00:00.297) 0:00:16.269 ****** 2025-01-16 15:09:45.851020 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:09:45.851027 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:09:45.851033 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:09:45.851039 | orchestrator | 2025-01-16 15:09:45.851045 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-01-16 15:09:45.851051 | orchestrator | Thursday 16 January 2025 15:08:32 +0000 
(0:00:00.206) 0:00:16.475 ****** 2025-01-16 15:09:45.851057 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-01-16 15:09:45.851066 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-01-16 15:09:45.851073 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-01-16 15:09:45.851079 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:09:45.851085 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-01-16 15:09:45.851091 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-01-16 15:09:45.851097 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-01-16 15:09:45.851105 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-01-16 15:09:45.851112 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-01-16 15:09:45.851117 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:09:45.851123 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-01-16 15:09:45.851129 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:09:45.851135 | orchestrator | 2025-01-16 15:09:45.851141 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-01-16 15:09:45.851147 | orchestrator | Thursday 16 January 2025 15:08:33 +0000 (0:00:00.898) 0:00:17.373 ****** 2025-01-16 15:09:45.851153 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-01-16 15:09:45.851159 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-01-16 15:09:45.851165 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-01-16 15:09:45.851171 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-01-16 15:09:45.851177 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:09:45.851183 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-01-16 15:09:45.851189 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-01-16 15:09:45.851195 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-01-16 15:09:45.851201 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:09:45.851207 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-01-16 15:09:45.851212 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-01-16 15:09:45.851218 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:09:45.851224 | orchestrator | 2025-01-16 15:09:45.851230 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-01-16 15:09:45.851236 | orchestrator | Thursday 16 January 2025 15:08:34 +0000 (0:00:00.910) 0:00:18.284 ****** 2025-01-16 15:09:45.851242 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-01-16 15:09:45.851248 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-01-16 15:09:45.851254 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-01-16 15:09:45.851264 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-01-16 15:09:45.851270 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-01-16 15:09:45.851276 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-01-16 15:09:45.851285 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-01-16 15:09:45.851291 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-01-16 15:09:45.851296 | orchestrator | ok: 
[testbed-node-5] => (item=testbed-node-2) 2025-01-16 15:09:45.851302 | orchestrator | 2025-01-16 15:09:45.851308 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-01-16 15:09:45.851314 | orchestrator | Thursday 16 January 2025 15:08:35 +0000 (0:00:01.154) 0:00:19.438 ****** 2025-01-16 15:09:45.851320 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-01-16 15:09:45.851326 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-01-16 15:09:45.851332 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-01-16 15:09:45.851338 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-01-16 15:09:45.851344 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:09:45.851350 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-01-16 15:09:45.851355 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-01-16 15:09:45.851361 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:09:45.851367 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-01-16 15:09:45.851373 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-01-16 15:09:45.851379 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-01-16 15:09:45.851385 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:09:45.851391 | orchestrator | 2025-01-16 15:09:45.851397 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-01-16 15:09:45.851403 | orchestrator | Thursday 16 January 2025 15:08:35 +0000 (0:00:00.330) 0:00:19.769 ****** 2025-01-16 15:09:45.851409 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-01-16 15:09:45.851415 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-01-16 15:09:45.851421 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-01-16 15:09:45.851429 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-01-16 15:09:45.851435 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-01-16 15:09:45.851441 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-01-16 15:09:45.851447 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:09:45.851453 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:09:45.851459 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-01-16 15:09:45.851465 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-01-16 15:09:45.851471 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-01-16 15:09:45.851476 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:09:45.851482 | orchestrator | 2025-01-16 15:09:45.851488 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-01-16 15:09:45.851494 | orchestrator | Thursday 16 January 2025 15:08:36 +0000 (0:00:00.468) 0:00:20.237 ****** 2025-01-16 15:09:45.851500 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-01-16 15:09:45.851507 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-01-16 15:09:45.851513 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-01-16 15:09:45.851519 | orchestrator | skipping: [testbed-node-3] 
2025-01-16 15:09:45.851525 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-01-16 15:09:45.851549 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-01-16 15:09:45.851559 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-01-16 15:09:45.851565 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:09:45.851571 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-01-16 15:09:45.851577 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-01-16 15:09:45.851583 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-01-16 15:09:45.851589 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:09:45.851595 | orchestrator | 2025-01-16 15:09:45.851601 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-01-16 15:09:45.851607 | orchestrator | Thursday 16 January 2025 15:08:36 +0000 (0:00:00.274) 0:00:20.511 ****** 2025-01-16 15:09:45.851613 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 15:09:45.851619 | orchestrator | 2025-01-16 15:09:45.851625 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-01-16 15:09:45.851631 | orchestrator | Thursday 16 January 2025 15:08:36 +0000 (0:00:00.367) 0:00:20.878 ****** 2025-01-16 15:09:45.851637 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:09:45.851643 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:09:45.851649 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:09:45.851655 | orchestrator | 2025-01-16 15:09:45.851661 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-01-16 15:09:45.851667 | orchestrator | Thursday 16 January 2025 15:08:37 +0000 (0:00:00.364) 0:00:21.243 ****** 2025-01-16 15:09:45.851673 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:09:45.851678 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:09:45.851684 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:09:45.851690 | orchestrator | 2025-01-16 15:09:45.851696 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-01-16 15:09:45.851702 | orchestrator | Thursday 16 January 2025 15:08:37 +0000 (0:00:00.296) 0:00:21.540 ****** 2025-01-16 15:09:45.851708 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:09:45.851714 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:09:45.851720 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:09:45.851726 | orchestrator | 2025-01-16 15:09:45.851732 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-01-16 15:09:45.851738 | orchestrator | Thursday 16 January 2025 15:08:37 +0000 (0:00:00.246) 0:00:21.787 ****** 2025-01-16 15:09:45.851744 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:09:45.851750 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:09:45.851756 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:09:45.851762 | orchestrator | 2025-01-16 15:09:45.851768 | orchestrator | TASK [ceph-facts : set_fact _interface] 
**************************************** 2025-01-16 15:09:45.851774 | orchestrator | Thursday 16 January 2025 15:08:38 +0000 (0:00:00.333) 0:00:22.120 ****** 2025-01-16 15:09:45.851780 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-01-16 15:09:45.851786 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-01-16 15:09:45.851792 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-01-16 15:09:45.851798 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:09:45.851804 | orchestrator | 2025-01-16 15:09:45.851810 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-01-16 15:09:45.851816 | orchestrator | Thursday 16 January 2025 15:08:38 +0000 (0:00:00.420) 0:00:22.540 ****** 2025-01-16 15:09:45.851822 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-01-16 15:09:45.851828 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-01-16 15:09:45.851833 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-01-16 15:09:45.851839 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:09:45.851849 | orchestrator | 2025-01-16 15:09:45.851855 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-01-16 15:09:45.851861 | orchestrator | Thursday 16 January 2025 15:08:38 +0000 (0:00:00.391) 0:00:22.932 ****** 2025-01-16 15:09:45.851867 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-01-16 15:09:45.851872 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-01-16 15:09:45.851878 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-01-16 15:09:45.851884 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:09:45.851890 | orchestrator | 2025-01-16 15:09:45.851896 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-01-16 15:09:45.851902 | orchestrator | Thursday 16 January 2025 15:08:39 +0000 (0:00:00.320) 0:00:23.253 ****** 2025-01-16 15:09:45.851908 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:09:45.851914 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:09:45.851920 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:09:45.851926 | orchestrator | 2025-01-16 15:09:45.851932 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-01-16 15:09:45.851938 | orchestrator | Thursday 16 January 2025 15:08:39 +0000 (0:00:00.246) 0:00:23.499 ****** 2025-01-16 15:09:45.851944 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-01-16 15:09:45.851950 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-01-16 15:09:45.851956 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-01-16 15:09:45.851961 | orchestrator | 2025-01-16 15:09:45.851967 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-01-16 15:09:45.851973 | orchestrator | Thursday 16 January 2025 15:08:40 +0000 (0:00:00.552) 0:00:24.052 ****** 2025-01-16 15:09:45.851979 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:09:45.851985 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:09:45.851991 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:09:45.851997 | orchestrator | 2025-01-16 15:09:45.852003 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-01-16 15:09:45.852018 | orchestrator | Thursday 16 January 2025 
15:08:40 +0000 (0:00:00.259) 0:00:24.311 ****** 2025-01-16 15:09:45.852024 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:09:45.852030 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:09:45.852036 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:09:45.852042 | orchestrator | 2025-01-16 15:09:45.852048 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-01-16 15:09:45.852054 | orchestrator | Thursday 16 January 2025 15:08:40 +0000 (0:00:00.400) 0:00:24.712 ****** 2025-01-16 15:09:45.852060 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-01-16 15:09:45.852066 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:09:45.852072 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-01-16 15:09:45.852077 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:09:45.852083 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-01-16 15:09:45.852089 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:09:45.852095 | orchestrator | 2025-01-16 15:09:45.852101 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-01-16 15:09:45.852107 | orchestrator | Thursday 16 January 2025 15:08:41 +0000 (0:00:00.293) 0:00:25.005 ****** 2025-01-16 15:09:45.852113 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-01-16 15:09:45.852119 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:09:45.852125 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-01-16 15:09:45.852131 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:09:45.852137 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-01-16 15:09:45.852143 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:09:45.852151 | orchestrator | 2025-01-16 15:09:45.852157 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-01-16 15:09:45.852166 | orchestrator | Thursday 16 January 2025 15:08:41 +0000 (0:00:00.215) 0:00:25.221 ****** 2025-01-16 15:09:45.852172 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-01-16 15:09:45.852178 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-01-16 15:09:45.852184 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-01-16 15:09:45.852190 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-01-16 15:09:45.852196 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-01-16 15:09:45.852202 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:09:45.852208 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-01-16 15:09:45.852214 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-01-16 15:09:45.852220 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-01-16 15:09:45.852226 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:09:45.852232 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-01-16 15:09:45.852238 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:09:45.852244 | orchestrator | 2025-01-16 15:09:45.852250 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or 
old ceph-iscsi-config/cli] *** 2025-01-16 15:09:45.852256 | orchestrator | Thursday 16 January 2025 15:08:41 +0000 (0:00:00.640) 0:00:25.861 ****** 2025-01-16 15:09:45.852262 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:09:45.852268 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:09:45.852274 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:09:45.852280 | orchestrator | 2025-01-16 15:09:45.852286 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-01-16 15:09:45.852292 | orchestrator | Thursday 16 January 2025 15:08:42 +0000 (0:00:00.333) 0:00:26.195 ****** 2025-01-16 15:09:45.852297 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-01-16 15:09:45.852303 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-01-16 15:09:45.852309 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-01-16 15:09:45.852315 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-01-16 15:09:45.852321 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-01-16 15:09:45.852327 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-01-16 15:09:45.852333 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-01-16 15:09:45.852339 | orchestrator | 2025-01-16 15:09:45.852345 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-01-16 15:09:45.852351 | orchestrator | Thursday 16 January 2025 15:08:42 +0000 (0:00:00.589) 0:00:26.784 ****** 2025-01-16 15:09:45.852357 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-01-16 15:09:45.852363 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-01-16 15:09:45.852369 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-01-16 15:09:45.852375 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-01-16 15:09:45.852381 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-01-16 15:09:45.852387 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-01-16 15:09:45.852393 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-01-16 15:09:45.852399 | orchestrator | 2025-01-16 15:09:45.852405 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-01-16 15:09:45.852413 | orchestrator | Thursday 16 January 2025 15:08:43 +0000 (0:00:01.138) 0:00:27.923 ****** 2025-01-16 15:09:45.852419 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:09:45.852428 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:09:45.852434 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-01-16 15:09:45.852440 | orchestrator | 2025-01-16 15:09:45.852446 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-01-16 15:09:45.852452 | orchestrator | Thursday 16 January 2025 15:08:44 +0000 (0:00:00.445) 0:00:28.369 ****** 2025-01-16 15:09:45.852459 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 
'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-01-16 15:09:45.852467 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-01-16 15:09:45.852473 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-01-16 15:09:45.852480 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-01-16 15:09:45.852486 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-01-16 15:09:45.852492 | orchestrator | 2025-01-16 15:09:45.852498 | orchestrator | TASK [generate keys] *********************************************************** 2025-01-16 15:09:45.852504 | orchestrator | Thursday 16 January 2025 15:09:15 +0000 (0:00:31.396) 0:00:59.765 ****** 2025-01-16 15:09:45.852512 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-01-16 15:09:45.852518 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-01-16 15:09:45.852524 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-01-16 15:09:45.852546 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-01-16 15:09:45.852556 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-01-16 15:09:45.852565 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-01-16 15:09:45.852575 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-01-16 15:09:45.852583 | orchestrator | 2025-01-16 15:09:45.852592 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-01-16 15:09:45.852601 | orchestrator | Thursday 16 January 2025 15:09:28 +0000 (0:00:12.553) 0:01:12.319 ****** 2025-01-16 15:09:45.852610 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-01-16 15:09:45.852620 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-01-16 15:09:45.852629 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-01-16 15:09:45.852638 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-01-16 15:09:45.852648 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-01-16 15:09:45.852654 | orchestrator | ok: [testbed-node-5 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2025-01-16 15:09:45.852660 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-01-16 15:09:45.852670 | orchestrator | 2025-01-16 15:09:45.852676 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-01-16 15:09:45.852682 | orchestrator | Thursday 16 January 2025 15:09:34 +0000 (0:00:06.262) 0:01:18.582 ****** 2025-01-16 15:09:45.852688 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-01-16 15:09:45.852693 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-01-16 15:09:45.852699 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-01-16 15:09:45.852705 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-01-16 15:09:45.852711 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-01-16 15:09:45.852716 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-01-16 15:09:45.852722 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-01-16 15:09:45.852732 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-01-16 15:09:48.865142 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-01-16 15:09:48.866303 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-01-16 15:09:48.866390 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-01-16 15:09:48.866416 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-01-16 15:09:48.866438 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-01-16 15:09:48.866460 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-01-16 15:09:48.866482 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-01-16 15:09:48.866504 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-01-16 15:09:48.866600 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-01-16 15:09:48.866625 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-01-16 15:09:48.866645 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-01-16 15:09:48.866664 | orchestrator | 2025-01-16 15:09:48.866685 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 15:09:48.866706 | orchestrator | testbed-node-3 : ok=30  changed=2  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-01-16 15:09:48.866727 | orchestrator | testbed-node-4 : ok=20  changed=0 unreachable=0 failed=0 skipped=30  rescued=0 ignored=0 2025-01-16 15:09:48.866746 | orchestrator | testbed-node-5 : ok=25  changed=3  unreachable=0 failed=0 skipped=29  rescued=0 ignored=0 2025-01-16 15:09:48.866764 | orchestrator | 2025-01-16 15:09:48.866782 | orchestrator | 2025-01-16 15:09:48.866800 | orchestrator | 2025-01-16 15:09:48.866819 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 15:09:48.866839 | orchestrator | Thursday 16 January 2025 15:09:45 
+0000 (0:00:10.925) 0:01:29.507 ****** 2025-01-16 15:09:48.866859 | orchestrator | =============================================================================== 2025-01-16 15:09:48.866880 | orchestrator | create openstack pool(s) ----------------------------------------------- 31.40s 2025-01-16 15:09:48.866900 | orchestrator | generate keys ---------------------------------------------------------- 12.55s 2025-01-16 15:09:48.866920 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 10.93s 2025-01-16 15:09:48.866940 | orchestrator | get keys from monitors -------------------------------------------------- 6.26s 2025-01-16 15:09:48.866960 | orchestrator | ceph-facts : find a running mon container ------------------------------- 1.40s 2025-01-16 15:09:48.866980 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 1.15s 2025-01-16 15:09:48.867034 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 1.14s 2025-01-16 15:09:48.867055 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 0.91s 2025-01-16 15:09:48.867076 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 0.90s 2025-01-16 15:09:48.867096 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 0.89s 2025-01-16 15:09:48.867116 | orchestrator | ceph-facts : get ceph current status ------------------------------------ 0.83s 2025-01-16 15:09:48.867136 | orchestrator | ceph-facts : set_fact rgw_instances_all --------------------------------- 0.64s 2025-01-16 15:09:48.867156 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 0.60s 2025-01-16 15:09:48.867176 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 0.59s 2025-01-16 15:09:48.867196 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.58s 2025-01-16 15:09:48.867242 | orchestrator | ceph-facts : set_fact devices generate device list when osd_auto_discovery --- 0.56s 2025-01-16 15:09:48.867263 | orchestrator | ceph-facts : set_fact rgw_instances without rgw multisite --------------- 0.55s 2025-01-16 15:09:48.867284 | orchestrator | ceph-facts : check if it is atomic host --------------------------------- 0.54s 2025-01-16 15:09:48.867305 | orchestrator | ceph-facts : set_fact monitor_name ansible_facts['hostname'] ------------ 0.50s 2025-01-16 15:09:48.867325 | orchestrator | ceph-facts : check for a ceph mon socket -------------------------------- 0.50s 2025-01-16 15:09:48.867369 | orchestrator | 2025-01-16 15:09:48 | INFO  | Task c003f5ec-2aa2-4397-9f74-def75e1af845 is in state STARTED 2025-01-16 15:09:51.888648 | orchestrator | 2025-01-16 15:09:48 | INFO  | Task 17d1511d-b7f0-408c-a2e7-0923ccaed05f is in state STARTED 2025-01-16 15:09:51.888774 | orchestrator | 2025-01-16 15:09:48 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:09:51.888808 | orchestrator | 2025-01-16 15:09:51 | INFO  | Task c003f5ec-2aa2-4397-9f74-def75e1af845 is in state STARTED 2025-01-16 15:09:54.916233 | orchestrator | 2025-01-16 15:09:51 | INFO  | Task 17d1511d-b7f0-408c-a2e7-0923ccaed05f is in state STARTED 2025-01-16 15:09:54.916390 | orchestrator | 2025-01-16 15:09:51 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:09:54.916433 | orchestrator | 2025-01-16 15:09:54 | INFO  | Task c003f5ec-2aa2-4397-9f74-def75e1af845 is in state STARTED 
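For readability, the pool definitions that the "create openstack pool(s)" task above iterated over can be written out as a single YAML variable. The list below is reconstructed solely from the item dictionaries printed in this log; the variable name openstack_pools follows the usual ceph-ansible convention and is an assumption, not something shown in the output itself.

# Sketch reconstructed from the logged items; variable name assumed.
openstack_pools:
  - &pool_defaults
    name: backups
    application: rbd
    pg_num: 32
    pgp_num: 32
    size: 3
    min_size: 0
    rule_name: replicated_rule
    pg_autoscale_mode: false
    type: 1
    erasure_profile: ""
    expected_num_objects: ""
  # The remaining pools were created with identical settings; only the name
  # differs in the logged items, expressed here via a YAML merge key.
  - { <<: *pool_defaults, name: volumes }
  - { <<: *pool_defaults, name: images }
  - { <<: *pool_defaults, name: metrics }
  - { <<: *pool_defaults, name: vms }

The TASKS RECAP above attributes 31.40 s to this pool-creation task, by far the longest step of the play.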
2025-01-16 15:09:57.943430 | orchestrator | 2025-01-16 15:09:54 | INFO  | Task 17d1511d-b7f0-408c-a2e7-0923ccaed05f is in state STARTED 2025-01-16 15:09:57.943644 | orchestrator | 2025-01-16 15:09:54 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:09:57.943863 | orchestrator | 2025-01-16 15:09:57 | INFO  | Task cdc9e25b-9369-471c-8001-0369ed03bd2f is in state STARTED 2025-01-16 15:10:00.964070 | orchestrator | 2025-01-16 15:09:57 | INFO  | Task c003f5ec-2aa2-4397-9f74-def75e1af845 is in state STARTED 2025-01-16 15:10:00.964318 | orchestrator | 2025-01-16 15:09:57 | INFO  | Task 17d1511d-b7f0-408c-a2e7-0923ccaed05f is in state STARTED 2025-01-16 15:10:00.964346 | orchestrator | 2025-01-16 15:09:57 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:10:00.964379 | orchestrator | 2025-01-16 15:10:00 | INFO  | Task cdc9e25b-9369-471c-8001-0369ed03bd2f is in state STARTED 2025-01-16 15:10:03.988910 | orchestrator | 2025-01-16 15:10:00 | INFO  | Task c003f5ec-2aa2-4397-9f74-def75e1af845 is in state STARTED 2025-01-16 15:10:03.989062 | orchestrator | 2025-01-16 15:10:00 | INFO  | Task 17d1511d-b7f0-408c-a2e7-0923ccaed05f is in state STARTED 2025-01-16 15:10:03.989093 | orchestrator | 2025-01-16 15:10:00 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:10:03.989158 | orchestrator | 2025-01-16 15:10:03 | INFO  | Task cdc9e25b-9369-471c-8001-0369ed03bd2f is in state STARTED 2025-01-16 15:10:07.010238 | orchestrator | 2025-01-16 15:10:03 | INFO  | Task c003f5ec-2aa2-4397-9f74-def75e1af845 is in state STARTED 2025-01-16 15:10:07.010370 | orchestrator | 2025-01-16 15:10:03 | INFO  | Task 17d1511d-b7f0-408c-a2e7-0923ccaed05f is in state STARTED 2025-01-16 15:10:07.010391 | orchestrator | 2025-01-16 15:10:03 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:10:07.010425 | orchestrator | 2025-01-16 15:10:07 | INFO  | Task cdc9e25b-9369-471c-8001-0369ed03bd2f is in state STARTED 2025-01-16 15:10:07.011302 | orchestrator | 2025-01-16 15:10:07 | INFO  | Task c003f5ec-2aa2-4397-9f74-def75e1af845 is in state STARTED 2025-01-16 15:10:07.011341 | orchestrator | 2025-01-16 15:10:07 | INFO  | Task 17d1511d-b7f0-408c-a2e7-0923ccaed05f is in state STARTED 2025-01-16 15:10:10.037788 | orchestrator | 2025-01-16 15:10:07 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:10:10.038202 | orchestrator | 2025-01-16 15:10:10 | INFO  | Task cdc9e25b-9369-471c-8001-0369ed03bd2f is in state STARTED 2025-01-16 15:10:10.039488 | orchestrator | 2025-01-16 15:10:10 | INFO  | Task c003f5ec-2aa2-4397-9f74-def75e1af845 is in state SUCCESS 2025-01-16 15:10:10.040103 | orchestrator | 2025-01-16 15:10:10.040127 | orchestrator | 2025-01-16 15:10:10.040142 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-01-16 15:10:10.040158 | orchestrator | 2025-01-16 15:10:10.040174 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-01-16 15:10:10.040189 | orchestrator | Thursday 16 January 2025 15:08:13 +0000 (0:00:00.207) 0:00:00.207 ****** 2025-01-16 15:10:10.040205 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:10:10.040221 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:10:10.040237 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:10:10.040252 | orchestrator | 2025-01-16 15:10:10.040268 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-01-16 15:10:10.040283 | orchestrator | Thursday 16 January 2025 15:08:14 +0000 
(0:00:00.257) 0:00:00.465 ****** 2025-01-16 15:10:10.040298 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-01-16 15:10:10.040313 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-01-16 15:10:10.040328 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-01-16 15:10:10.040343 | orchestrator | 2025-01-16 15:10:10.040358 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-01-16 15:10:10.040373 | orchestrator | 2025-01-16 15:10:10.040405 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-01-16 15:10:10.040421 | orchestrator | Thursday 16 January 2025 15:08:14 +0000 (0:00:00.201) 0:00:00.667 ****** 2025-01-16 15:10:10.040436 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:10:10.040452 | orchestrator | 2025-01-16 15:10:10.040469 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-01-16 15:10:10.040484 | orchestrator | Thursday 16 January 2025 15:08:14 +0000 (0:00:00.509) 0:00:01.176 ****** 2025-01-16 15:10:10.040505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-01-16 15:10:10.040577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-01-16 15:10:10.040650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-01-16 15:10:10.040679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-01-16 15:10:10.040704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-01-16 15:10:10.040729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-01-16 15:10:10.040767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-01-16 15:10:10.040793 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-01-16 15:10:10.040818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-01-16 15:10:10.040843 | orchestrator | 2025-01-16 15:10:10.040868 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-01-16 15:10:10.040892 | orchestrator | Thursday 16 January 2025 15:08:16 +0000 (0:00:01.803) 0:00:02.979 ****** 2025-01-16 15:10:10.040918 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-01-16 15:10:10.040933 | orchestrator | 2025-01-16 15:10:10.040947 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-01-16 15:10:10.040961 | orchestrator | Thursday 16 January 2025 15:08:16 +0000 (0:00:00.393) 0:00:03.372 ****** 2025-01-16 15:10:10.040975 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:10:10.040990 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:10:10.041004 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:10:10.041018 | orchestrator | 2025-01-16 15:10:10.041032 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-01-16 15:10:10.041046 | orchestrator | Thursday 16 January 2025 15:08:17 +0000 (0:00:00.376) 0:00:03.749 ****** 2025-01-16 15:10:10.041060 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-01-16 15:10:10.041075 | orchestrator | 2025-01-16 15:10:10.041088 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-01-16 15:10:10.041102 | orchestrator | Thursday 16 January 2025 15:08:17 +0000 (0:00:00.256) 0:00:04.005 ****** 2025-01-16 15:10:10.041117 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:10:10.041130 | orchestrator | 2025-01-16 15:10:10.041144 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-01-16 15:10:10.041158 | orchestrator | Thursday 16 January 2025 15:08:17 +0000 (0:00:00.437) 0:00:04.443 ****** 2025-01-16 15:10:10.041173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-01-16 15:10:10.041200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-01-16 15:10:10.041216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-01-16 15:10:10.041264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-01-16 15:10:10.041289 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-01-16 15:10:10.041313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-01-16 15:10:10.041328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-01-16 15:10:10.041342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-01-16 15:10:10.041357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-01-16 15:10:10.041371 | orchestrator | 2025-01-16 15:10:10.041385 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-01-16 15:10:10.041400 | orchestrator | Thursday 16 January 2025 15:08:20 +0000 (0:00:02.252) 0:00:06.695 ****** 2025-01-16 15:10:10.041429 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-01-16 15:10:10.041452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-01-16 15:10:10.041467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-01-16 15:10:10.041482 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:10:10.041497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-01-16 15:10:10.041512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-01-16 15:10:10.041568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-01-16 15:10:10.041585 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:10:10.041605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-01-16 15:10:10.041628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-01-16 15:10:10.041642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-01-16 15:10:10.041657 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:10:10.041671 | orchestrator | 2025-01-16 15:10:10.041686 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-01-16 15:10:10.041700 | orchestrator | Thursday 16 January 2025 15:08:20 +0000 (0:00:00.545) 0:00:07.241 ****** 2025-01-16 15:10:10.041714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-01-16 15:10:10.041743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-01-16 15:10:10.041758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-01-16 15:10:10.041780 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:10:10.041794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-01-16 15:10:10.041809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-01-16 15:10:10.041824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-01-16 15:10:10.041838 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:10:10.041870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-01-16 15:10:10.041909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-01-16 15:10:10.041935 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-01-16 15:10:10.041960 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:10:10.041976 | orchestrator | 2025-01-16 15:10:10.041990 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-01-16 15:10:10.042004 | orchestrator | Thursday 16 January 2025 15:08:21 +0000 (0:00:00.804) 0:00:08.045 ****** 2025-01-16 15:10:10.042071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-01-16 15:10:10.042102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-01-16 15:10:10.042129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-01-16 15:10:10.042154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-01-16 15:10:10.042169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-01-16 15:10:10.042183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-01-16 15:10:10.042197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-01-16 15:10:10.042217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-01-16 15:10:10.042238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-01-16 15:10:10.042260 | orchestrator | 2025-01-16 15:10:10.042280 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-01-16 15:10:10.042305 | orchestrator | Thursday 16 January 2025 15:08:23 +0000 (0:00:02.052) 0:00:10.098 ****** 2025-01-16 15:10:10.042330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-01-16 15:10:10.042349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-01-16 15:10:10.042363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 
'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-01-16 15:10:10.042384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-01-16 15:10:10.042414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-01-16 15:10:10.042445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-01-16 15:10:10.042470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-01-16 15:10:10.042485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 
'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-01-16 15:10:10.042505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-01-16 15:10:10.042520 | orchestrator | 2025-01-16 15:10:10.042568 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-01-16 15:10:10.042592 | orchestrator | Thursday 16 January 2025 15:08:27 +0000 (0:00:04.273) 0:00:14.371 ****** 2025-01-16 15:10:10.042615 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:10:10.042638 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:10:10.042673 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:10:10.042697 | orchestrator | 2025-01-16 15:10:10.042721 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-01-16 15:10:10.042744 | orchestrator | Thursday 16 January 2025 15:08:29 +0000 (0:00:01.281) 0:00:15.653 ****** 2025-01-16 15:10:10.042763 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:10:10.042778 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:10:10.042792 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:10:10.042806 | orchestrator | 2025-01-16 15:10:10.042820 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-01-16 15:10:10.042835 | orchestrator | Thursday 16 January 2025 15:08:30 +0000 (0:00:00.825) 0:00:16.478 ****** 2025-01-16 15:10:10.042849 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:10:10.042862 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:10:10.042876 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:10:10.042890 | orchestrator | 2025-01-16 15:10:10.042904 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-01-16 15:10:10.042918 | orchestrator | Thursday 16 January 2025 15:08:30 +0000 (0:00:00.268) 0:00:16.746 ****** 2025-01-16 15:10:10.042932 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:10:10.042954 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:10:10.042969 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:10:10.042983 | orchestrator | 2025-01-16 15:10:10.042997 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-01-16 15:10:10.043011 | orchestrator | Thursday 16 January 2025 15:08:30 +0000 (0:00:00.348) 0:00:17.095 ****** 2025-01-16 15:10:10.043026 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-01-16 15:10:10.043042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-01-16 15:10:10.043057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-01-16 15:10:10.043081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-01-16 15:10:10.043115 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-01-16 15:10:10.043131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-01-16 15:10:10.043146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-01-16 15:10:10.043160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-01-16 15:10:10.043175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-01-16 15:10:10.043195 | orchestrator | 2025-01-16 
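
The loop items dumped above are kolla-ansible's container definitions for the keystone services (the keystone_services dictionary from the role defaults, rendered per host). Re-laid out as YAML purely for readability, the main keystone entry reported for testbed-node-0 corresponds to the following; the values are taken verbatim from the item above, except that the empty optional volume entry ('') is dropped here:

    keystone:
      container_name: keystone
      group: keystone
      enabled: true
      image: nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1
      volumes:
        - /etc/kolla/keystone/:/var/lib/kolla/config_files/:ro
        - /etc/localtime:/etc/localtime:ro
        - /etc/timezone:/etc/timezone:ro
        - kolla_logs:/var/log/kolla/
        - keystone_fernet_tokens:/etc/keystone/fernet-keys
      dimensions: {}
      healthcheck:
        interval: "30"
        retries: "3"
        start_period: "5"
        test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:5000"]
        timeout: "30"
      haproxy:
        keystone_internal:
          enabled: true
          mode: http
          external: false
          tls_backend: "no"
          port: "5000"
          listen_port: "5000"
          backend_http_extra:
            - balance "roundrobin"
        keystone_external:
          enabled: true
          mode: http
          external: true
          external_fqdn: api.testbed.osism.xyz
          tls_backend: "no"
          port: "5000"
          listen_port: "5000"
          backend_http_extra:
            - balance "roundrobin"

The only per-node difference in the three items is the healthcheck URL (192.168.16.10/.11/.12), i.e. each node's own API address; the haproxy sub-dictionary is what the load balancer configuration uses to build the keystone_internal and keystone_external frontends on port 5000.
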
15:10:10.043210 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-01-16 15:10:10.043224 | orchestrator | Thursday 16 January 2025 15:08:32 +0000 (0:00:01.746) 0:00:18.842 ****** 2025-01-16 15:10:10.043238 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:10:10.043252 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:10:10.043266 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:10:10.043280 | orchestrator | 2025-01-16 15:10:10.043294 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-01-16 15:10:10.043309 | orchestrator | Thursday 16 January 2025 15:08:32 +0000 (0:00:00.391) 0:00:19.233 ****** 2025-01-16 15:10:10.043323 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-01-16 15:10:10.043338 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-01-16 15:10:10.043381 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-01-16 15:10:10.043396 | orchestrator | 2025-01-16 15:10:10.043410 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-01-16 15:10:10.043424 | orchestrator | Thursday 16 January 2025 15:08:34 +0000 (0:00:02.126) 0:00:21.360 ****** 2025-01-16 15:10:10.043438 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-01-16 15:10:10.043460 | orchestrator | 2025-01-16 15:10:10.043482 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-01-16 15:10:10.043506 | orchestrator | Thursday 16 January 2025 15:08:35 +0000 (0:00:00.605) 0:00:21.965 ****** 2025-01-16 15:10:10.043563 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:10:10.043598 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:10:10.043623 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:10:10.043639 | orchestrator | 2025-01-16 15:10:10.043653 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-01-16 15:10:10.043666 | orchestrator | Thursday 16 January 2025 15:08:36 +0000 (0:00:00.810) 0:00:22.776 ****** 2025-01-16 15:10:10.043680 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-01-16 15:10:10.043694 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-01-16 15:10:10.043708 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-01-16 15:10:10.043721 | orchestrator | 2025-01-16 15:10:10.043735 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-01-16 15:10:10.043749 | orchestrator | Thursday 16 January 2025 15:08:36 +0000 (0:00:00.629) 0:00:23.405 ****** 2025-01-16 15:10:10.043763 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:10:10.043777 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:10:10.043791 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:10:10.043805 | orchestrator | 2025-01-16 15:10:10.043818 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-01-16 15:10:10.043832 | orchestrator | Thursday 16 January 2025 15:08:37 +0000 (0:00:00.252) 0:00:23.657 ****** 2025-01-16 15:10:10.043846 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-01-16 15:10:10.043860 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-01-16 
15:10:10.043874 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-01-16 15:10:10.043888 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-01-16 15:10:10.043902 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-01-16 15:10:10.043926 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-01-16 15:10:10.043940 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-01-16 15:10:10.043955 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-01-16 15:10:10.043969 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-01-16 15:10:10.043983 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-01-16 15:10:10.043996 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-01-16 15:10:10.044010 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-01-16 15:10:10.044034 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-01-16 15:10:10.044057 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-01-16 15:10:10.044081 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-01-16 15:10:10.044104 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-01-16 15:10:10.044127 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-01-16 15:10:10.044150 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-01-16 15:10:10.044174 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-01-16 15:10:10.044198 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-01-16 15:10:10.044222 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-01-16 15:10:10.044242 | orchestrator | 2025-01-16 15:10:10.044257 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-01-16 15:10:10.044271 | orchestrator | Thursday 16 January 2025 15:08:45 +0000 (0:00:08.653) 0:00:32.311 ****** 2025-01-16 15:10:10.044285 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-01-16 15:10:10.044299 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-01-16 15:10:10.044313 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-01-16 15:10:10.044327 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-01-16 15:10:10.044341 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-01-16 15:10:10.044355 | orchestrator | changed: [testbed-node-2] => (item={'src': 
'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-01-16 15:10:10.044369 | orchestrator | 2025-01-16 15:10:10.044383 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-01-16 15:10:10.044397 | orchestrator | Thursday 16 January 2025 15:08:47 +0000 (0:00:01.946) 0:00:34.258 ****** 2025-01-16 15:10:10.044423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-01-16 15:10:10.044448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-01-16 15:10:10.044463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-01-16 15:10:10.044518 | orchestrator 
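
The files copied for keystone-fernet and keystone-ssh above (crontab, fernet-rotate.sh, fernet-node-sync.sh, fernet-push.sh, fernet-healthcheck.sh, id_rsa/id_rsa.pub, ssh_config/sshd_config) implement fernet key rotation and distribution: the keystone_ssh container runs an sshd on port 8023 (see the healthcheck_listen sshd 8023 check above), and the rotate/push scripts move keys held in the shared keystone_fernet_tokens volume between the nodes. The templates themselves are not part of this log; purely as an illustration of the kind of schedule involved, an equivalent rotation job could be installed with an Ansible task like the one below. The schedule and user are assumptions, not values from this deployment, and kolla actually builds the crontab from crontab.j2 and runs it inside the keystone_fernet container rather than via the cron module:

    - name: Install fernet rotation cron job (illustrative only)
      ansible.builtin.cron:
        name: keystone-fernet-rotate
        user: keystone            # assumed
        minute: "0"
        hour: "*/6"               # assumed interval; kolla derives it from the token expiry
        job: /usr/bin/fernet-rotate.sh && /usr/bin/fernet-push.sh
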
| changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-01-16 15:10:10.044559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-01-16 15:10:10.044583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-01-16 15:10:10.044606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-01-16 15:10:10.044621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-01-16 15:10:10.044635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-01-16 15:10:10.044650 | orchestrator | 2025-01-16 15:10:10.044664 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-01-16 15:10:10.044683 | orchestrator | Thursday 16 January 2025 15:08:49 +0000 (0:00:01.792) 0:00:36.050 ****** 2025-01-16 15:10:10.044697 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:10:10.044718 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:10:10.044743 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:10:10.044767 | orchestrator | 2025-01-16 15:10:10.044792 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-01-16 15:10:10.044808 | orchestrator | Thursday 16 January 2025 15:08:49 +0000 (0:00:00.178) 0:00:36.229 ****** 2025-01-16 15:10:10.044822 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:10:10.044836 | orchestrator | 2025-01-16 15:10:10.044850 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-01-16 15:10:10.044864 | orchestrator | Thursday 16 January 2025 15:08:51 +0000 (0:00:01.618) 0:00:37.847 ****** 2025-01-16 15:10:10.044878 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:10:10.044899 | orchestrator | 2025-01-16 15:10:10.044913 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-01-16 15:10:10.044927 | orchestrator | Thursday 16 January 2025 15:08:52 +0000 (0:00:01.524) 0:00:39.372 ****** 2025-01-16 15:10:10.044941 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:10:10.044955 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:10:10.044969 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:10:10.044983 | orchestrator | 2025-01-16 15:10:10.045005 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-01-16 15:10:10.045026 | orchestrator | Thursday 16 January 2025 15:08:53 +0000 (0:00:00.588) 0:00:39.960 ****** 2025-01-16 15:10:10.045050 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:10:10.045085 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:10:10.045101 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:10:10.045115 | orchestrator | 2025-01-16 15:10:10.045129 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-01-16 15:10:10.045143 | orchestrator | Thursday 16 January 2025 15:08:53 +0000 (0:00:00.211) 0:00:40.172 ****** 2025-01-16 15:10:10.045157 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:10:10.045171 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:10:10.045184 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:10:10.045198 | orchestrator | 2025-01-16 15:10:10.045212 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-01-16 15:10:10.045226 | orchestrator | Thursday 16 January 2025 15:08:54 +0000 (0:00:00.368) 0:00:40.540 ****** 2025-01-16 15:10:10.045244 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:10:10.045266 | orchestrator | 2025-01-16 15:10:10.045299 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-01-16 15:10:10.045323 | 
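
Each healthcheck block in the container definitions checked above is translated into a Docker-level health check; helpers such as healthcheck_curl and healthcheck_listen are small scripts shipped in the kolla images. Expressed compose-style (an approximation, assuming the interval/timeout/start_period values are interpreted as seconds, which is how kolla treats them), the keystone_fernet and keystone_ssh checks amount to:

    keystone_fernet:
      healthcheck:
        test: ["CMD-SHELL", "/usr/bin/fernet-healthcheck.sh"]
        interval: 30s
        timeout: 30s
        retries: 3
        start_period: 5s
    keystone_ssh:
      healthcheck:
        test: ["CMD-SHELL", "healthcheck_listen sshd 8023"]
        interval: 30s
        timeout: 30s
        retries: 3
        start_period: 5s
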
orchestrator | Thursday 16 January 2025 15:09:02 +0000 (0:00:08.161) 0:00:48.702 ****** 2025-01-16 15:10:10.045340 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:10:10.045354 | orchestrator | 2025-01-16 15:10:10.045368 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-01-16 15:10:10.045382 | orchestrator | Thursday 16 January 2025 15:09:07 +0000 (0:00:05.188) 0:00:53.890 ****** 2025-01-16 15:10:10.045396 | orchestrator | 2025-01-16 15:10:10.045410 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-01-16 15:10:10.045424 | orchestrator | Thursday 16 January 2025 15:09:07 +0000 (0:00:00.105) 0:00:53.995 ****** 2025-01-16 15:10:10.045438 | orchestrator | 2025-01-16 15:10:10.045452 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-01-16 15:10:10.045466 | orchestrator | Thursday 16 January 2025 15:09:07 +0000 (0:00:00.039) 0:00:54.035 ****** 2025-01-16 15:10:10.045488 | orchestrator | 2025-01-16 15:10:10.045513 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-01-16 15:10:10.045575 | orchestrator | Thursday 16 January 2025 15:09:07 +0000 (0:00:00.039) 0:00:54.075 ****** 2025-01-16 15:10:10.045591 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:10:10.045605 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:10:10.045619 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:10:10.045633 | orchestrator | 2025-01-16 15:10:10.045646 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-01-16 15:10:10.045660 | orchestrator | Thursday 16 January 2025 15:09:18 +0000 (0:00:10.957) 0:01:05.032 ****** 2025-01-16 15:10:10.045674 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:10:10.045687 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:10:10.045701 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:10:10.045715 | orchestrator | 2025-01-16 15:10:10.045728 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-01-16 15:10:10.045742 | orchestrator | Thursday 16 January 2025 15:09:26 +0000 (0:00:07.643) 0:01:12.676 ****** 2025-01-16 15:10:10.045756 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:10:10.045770 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:10:10.045783 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:10:10.045797 | orchestrator | 2025-01-16 15:10:10.045811 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-01-16 15:10:10.045825 | orchestrator | Thursday 16 January 2025 15:09:29 +0000 (0:00:02.866) 0:01:15.543 ****** 2025-01-16 15:10:10.045839 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:10:10.045853 | orchestrator | 2025-01-16 15:10:10.045866 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-01-16 15:10:10.045880 | orchestrator | Thursday 16 January 2025 15:09:29 +0000 (0:00:00.476) 0:01:16.020 ****** 2025-01-16 15:10:10.045894 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:10:10.045907 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:10:10.045930 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:10:10.045944 | orchestrator | 2025-01-16 15:10:10.045977 | orchestrator | TASK [keystone : Run key distribution] 
***************************************** 2025-01-16 15:10:10.045998 | orchestrator | Thursday 16 January 2025 15:09:30 +0000 (0:00:00.655) 0:01:16.675 ****** 2025-01-16 15:10:10.046012 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:10:10.046107 | orchestrator | 2025-01-16 15:10:10.046122 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-01-16 15:10:10.046135 | orchestrator | Thursday 16 January 2025 15:09:31 +0000 (0:00:00.963) 0:01:17.639 ****** 2025-01-16 15:10:10.046149 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-01-16 15:10:10.046163 | orchestrator | 2025-01-16 15:10:10.046177 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-01-16 15:10:10.046191 | orchestrator | Thursday 16 January 2025 15:09:36 +0000 (0:00:05.493) 0:01:23.132 ****** 2025-01-16 15:10:10.046205 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-01-16 15:10:10.046218 | orchestrator | 2025-01-16 15:10:10.046232 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-01-16 15:10:10.046246 | orchestrator | Thursday 16 January 2025 15:09:50 +0000 (0:00:14.232) 0:01:37.364 ****** 2025-01-16 15:10:10.046260 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-01-16 15:10:10.046274 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-01-16 15:10:10.046288 | orchestrator | 2025-01-16 15:10:10.046302 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-01-16 15:10:10.046316 | orchestrator | Thursday 16 January 2025 15:10:04 +0000 (0:00:13.683) 0:01:51.048 ****** 2025-01-16 15:10:10.046330 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:10:10.046344 | orchestrator | 2025-01-16 15:10:10.046357 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-01-16 15:10:10.046371 | orchestrator | Thursday 16 January 2025 15:10:04 +0000 (0:00:00.072) 0:01:51.121 ****** 2025-01-16 15:10:10.046385 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:10:10.046399 | orchestrator | 2025-01-16 15:10:10.046412 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-01-16 15:10:10.046426 | orchestrator | Thursday 16 January 2025 15:10:04 +0000 (0:00:00.072) 0:01:51.193 ****** 2025-01-16 15:10:10.046440 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:10:10.046453 | orchestrator | 2025-01-16 15:10:10.046467 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-01-16 15:10:10.046481 | orchestrator | Thursday 16 January 2025 15:10:04 +0000 (0:00:00.075) 0:01:51.269 ****** 2025-01-16 15:10:10.046495 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:10:10.046509 | orchestrator | 2025-01-16 15:10:10.046522 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-01-16 15:10:10.046558 | orchestrator | Thursday 16 January 2025 15:10:05 +0000 (0:00:00.267) 0:01:51.536 ****** 2025-01-16 15:10:10.046572 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:10:10.046586 | orchestrator | 2025-01-16 15:10:10.046609 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-01-16 15:10:13.066871 | orchestrator | Thursday 16 
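
The service-ks-register tasks above create the identity catalog entries: a keystone service of type identity in RegionOne, with an internal endpoint at https://api-int.testbed.osism.xyz:5000 and a public endpoint at https://api.testbed.osism.xyz:5000. The endpoint items report "ok" rather than "changed", presumably because the bootstrap step just before already created them. The input consumed by the role is roughly of the following shape (structure assumed from kolla-ansible conventions; names and URLs taken from the log):

    keystone_ks_services:
      - name: keystone
        type: identity
        endpoints:
          - interface: internal
            url: https://api-int.testbed.osism.xyz:5000
          - interface: public
            url: https://api.testbed.osism.xyz:5000
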
January 2025 15:10:07 +0000 (0:00:02.218) 0:01:53.755 ****** 2025-01-16 15:10:13.066971 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:10:13.066981 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:10:13.066987 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:10:13.066992 | orchestrator | 2025-01-16 15:10:13.066999 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 15:10:13.067005 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-01-16 15:10:13.067012 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-01-16 15:10:13.067038 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-01-16 15:10:13.067044 | orchestrator | 2025-01-16 15:10:13.067050 | orchestrator | 2025-01-16 15:10:13.067055 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 15:10:13.067060 | orchestrator | Thursday 16 January 2025 15:10:07 +0000 (0:00:00.338) 0:01:54.094 ****** 2025-01-16 15:10:13.067066 | orchestrator | =============================================================================== 2025-01-16 15:10:13.067071 | orchestrator | service-ks-register : keystone | Creating services --------------------- 14.23s 2025-01-16 15:10:13.067076 | orchestrator | service-ks-register : keystone | Creating endpoints -------------------- 13.68s 2025-01-16 15:10:13.067081 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 10.96s 2025-01-16 15:10:13.067086 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.65s 2025-01-16 15:10:13.067092 | orchestrator | keystone : Running Keystone bootstrap container ------------------------- 8.16s 2025-01-16 15:10:13.067097 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 7.64s 2025-01-16 15:10:13.067102 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint ---- 5.49s 2025-01-16 15:10:13.067107 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 5.19s 2025-01-16 15:10:13.067112 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.27s 2025-01-16 15:10:13.067127 | orchestrator | keystone : Restart keystone container ----------------------------------- 2.87s 2025-01-16 15:10:13.067132 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 2.25s 2025-01-16 15:10:13.067138 | orchestrator | keystone : Creating default user role ----------------------------------- 2.22s 2025-01-16 15:10:13.067147 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.13s 2025-01-16 15:10:13.067155 | orchestrator | keystone : Copying over config.json files for services ------------------ 2.05s 2025-01-16 15:10:13.067163 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 1.95s 2025-01-16 15:10:13.067170 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.80s 2025-01-16 15:10:13.067178 | orchestrator | keystone : Check keystone containers ------------------------------------ 1.79s 2025-01-16 15:10:13.067185 | orchestrator | keystone : Copying over existing policy file ---------------------------- 1.75s 2025-01-16 
15:10:13.067193 | orchestrator | keystone : Creating keystone database ----------------------------------- 1.62s 2025-01-16 15:10:13.067201 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 1.52s 2025-01-16 15:10:13.067210 | orchestrator | 2025-01-16 15:10:10 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 2025-01-16 15:10:13.067323 | orchestrator | 2025-01-16 15:10:10 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:10:13.067331 | orchestrator | 2025-01-16 15:10:10 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:10:13.067336 | orchestrator | 2025-01-16 15:10:10 | INFO  | Task 17d1511d-b7f0-408c-a2e7-0923ccaed05f is in state STARTED 2025-01-16 15:10:13.067342 | orchestrator | 2025-01-16 15:10:10 | INFO  | Task 1389b10c-9cf6-465f-9347-2baf85abefff is in state STARTED 2025-01-16 15:10:13.067347 | orchestrator | 2025-01-16 15:10:10 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:10:13.067375 | orchestrator | 2025-01-16 15:10:13 | INFO  | Task cdc9e25b-9369-471c-8001-0369ed03bd2f is in state STARTED 2025-01-16 15:10:13.068380 | orchestrator | 2025-01-16 15:10:13 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 2025-01-16 15:10:13.068418 | orchestrator | 2025-01-16 15:10:13 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:10:13.068442 | orchestrator | 2025-01-16 15:10:13 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:10:13.068628 | orchestrator | 2025-01-16 15:10:13 | INFO  | Task 17d1511d-b7f0-408c-a2e7-0923ccaed05f is in state STARTED 2025-01-16 15:10:13.069358 | orchestrator | 2025-01-16 15:10:13 | INFO  | Task 1389b10c-9cf6-465f-9347-2baf85abefff is in state STARTED 2025-01-16 15:10:16.090437 | orchestrator | 2025-01-16 15:10:13 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:10:16.090723 | orchestrator | 2025-01-16 15:10:16.090747 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-01-16 15:10:16.090760 | orchestrator | 2025-01-16 15:10:16.090774 | orchestrator | PLAY [Apply role fetch-keys] *************************************************** 2025-01-16 15:10:16.090786 | orchestrator | 2025-01-16 15:10:16.090799 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-01-16 15:10:16.090812 | orchestrator | Thursday 16 January 2025 15:09:57 +0000 (0:00:00.323) 0:00:00.323 ****** 2025-01-16 15:10:16.090824 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0 2025-01-16 15:10:16.090838 | orchestrator | 2025-01-16 15:10:16.090851 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-01-16 15:10:16.090864 | orchestrator | Thursday 16 January 2025 15:09:57 +0000 (0:00:00.134) 0:00:00.457 ****** 2025-01-16 15:10:16.090877 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-01-16 15:10:16.090890 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1) 2025-01-16 15:10:16.090903 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2) 2025-01-16 15:10:16.090916 | orchestrator | 2025-01-16 15:10:16.090929 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-01-16 15:10:16.090941 | orchestrator | Thursday 16 January 2025 15:09:58 +0000 
(0:00:00.565) 0:00:01.022 ****** 2025-01-16 15:10:16.090954 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2025-01-16 15:10:16.090966 | orchestrator | 2025-01-16 15:10:16.090978 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-01-16 15:10:16.090990 | orchestrator | Thursday 16 January 2025 15:09:58 +0000 (0:00:00.148) 0:00:01.171 ****** 2025-01-16 15:10:16.091002 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:10:16.091015 | orchestrator | 2025-01-16 15:10:16.091027 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-01-16 15:10:16.091040 | orchestrator | Thursday 16 January 2025 15:09:58 +0000 (0:00:00.391) 0:00:01.563 ****** 2025-01-16 15:10:16.091052 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:10:16.091064 | orchestrator | 2025-01-16 15:10:16.091077 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-01-16 15:10:16.091089 | orchestrator | Thursday 16 January 2025 15:09:58 +0000 (0:00:00.072) 0:00:01.636 ****** 2025-01-16 15:10:16.091101 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:10:16.091204 | orchestrator | 2025-01-16 15:10:16.091219 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-01-16 15:10:16.091236 | orchestrator | Thursday 16 January 2025 15:09:59 +0000 (0:00:00.273) 0:00:01.909 ****** 2025-01-16 15:10:16.091249 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:10:16.091261 | orchestrator | 2025-01-16 15:10:16.091274 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-01-16 15:10:16.091286 | orchestrator | Thursday 16 January 2025 15:09:59 +0000 (0:00:00.083) 0:00:01.993 ****** 2025-01-16 15:10:16.091299 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:10:16.091311 | orchestrator | 2025-01-16 15:10:16.091323 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-01-16 15:10:16.091335 | orchestrator | Thursday 16 January 2025 15:09:59 +0000 (0:00:00.079) 0:00:02.072 ****** 2025-01-16 15:10:16.091348 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:10:16.091360 | orchestrator | 2025-01-16 15:10:16.091388 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-01-16 15:10:16.091421 | orchestrator | Thursday 16 January 2025 15:09:59 +0000 (0:00:00.087) 0:00:02.159 ****** 2025-01-16 15:10:16.091434 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:10:16.091447 | orchestrator | 2025-01-16 15:10:16.091460 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-01-16 15:10:16.091557 | orchestrator | Thursday 16 January 2025 15:09:59 +0000 (0:00:00.085) 0:00:02.244 ****** 2025-01-16 15:10:16.091571 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:10:16.091584 | orchestrator | 2025-01-16 15:10:16.091596 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-01-16 15:10:16.091608 | orchestrator | Thursday 16 January 2025 15:09:59 +0000 (0:00:00.078) 0:00:02.323 ****** 2025-01-16 15:10:16.091621 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-01-16 15:10:16.091757 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-01-16 15:10:16.091772 | orchestrator | ok: [testbed-node-0 -> 
testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-01-16 15:10:16.091785 | orchestrator | 2025-01-16 15:10:16.091797 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-01-16 15:10:16.091810 | orchestrator | Thursday 16 January 2025 15:10:00 +0000 (0:00:00.682) 0:00:03.005 ****** 2025-01-16 15:10:16.091822 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:10:16.091835 | orchestrator | 2025-01-16 15:10:16.091847 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-01-16 15:10:16.091860 | orchestrator | Thursday 16 January 2025 15:10:00 +0000 (0:00:00.181) 0:00:03.186 ****** 2025-01-16 15:10:16.091872 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-01-16 15:10:16.091885 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-01-16 15:10:16.091897 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-01-16 15:10:16.091910 | orchestrator | 2025-01-16 15:10:16.091922 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-01-16 15:10:16.091935 | orchestrator | Thursday 16 January 2025 15:10:01 +0000 (0:00:01.243) 0:00:04.430 ****** 2025-01-16 15:10:16.091948 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-01-16 15:10:16.091961 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-01-16 15:10:16.091973 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-01-16 15:10:16.091986 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:10:16.091999 | orchestrator | 2025-01-16 15:10:16.092011 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-01-16 15:10:16.092038 | orchestrator | Thursday 16 January 2025 15:10:01 +0000 (0:00:00.288) 0:00:04.718 ****** 2025-01-16 15:10:16.092052 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-01-16 15:10:16.092068 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-01-16 15:10:16.092081 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-01-16 15:10:16.092094 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:10:16.092106 | orchestrator | 2025-01-16 15:10:16.092119 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-01-16 15:10:16.092131 | orchestrator | Thursday 16 January 2025 15:10:02 +0000 (0:00:00.448) 0:00:05.167 ****** 2025-01-16 15:10:16.092151 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 
'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-01-16 15:10:16.092182 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-01-16 15:10:16.092195 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-01-16 15:10:16.092208 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:10:16.092221 | orchestrator | 2025-01-16 15:10:16.092234 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-01-16 15:10:16.092247 | orchestrator | Thursday 16 January 2025 15:10:02 +0000 (0:00:00.104) 0:00:05.272 ****** 2025-01-16 15:10:16.092262 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '439665be0bb2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-01-16 15:10:00.749673', 'end': '2025-01-16 15:10:00.770068', 'delta': '0:00:00.020395', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['439665be0bb2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-01-16 15:10:16.092310 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '72057891a3d7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-01-16 15:10:01.106537', 'end': '2025-01-16 15:10:01.130012', 'delta': '0:00:00.023475', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['72057891a3d7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-01-16 15:10:16.092336 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': 'd38109367755', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-01-16 15:10:01.458132', 'end': '2025-01-16 15:10:01.478264', 'delta': '0:00:00.020132', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d38109367755'], 'stderr_lines': [], 'failed': False, 
'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-01-16 15:10:16.092349 | orchestrator | 2025-01-16 15:10:16.092362 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-01-16 15:10:16.092374 | orchestrator | Thursday 16 January 2025 15:10:02 +0000 (0:00:00.123) 0:00:05.396 ****** 2025-01-16 15:10:16.092397 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:10:16.092411 | orchestrator | 2025-01-16 15:10:16.092429 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-01-16 15:10:16.092444 | orchestrator | Thursday 16 January 2025 15:10:02 +0000 (0:00:00.169) 0:00:05.566 ****** 2025-01-16 15:10:16.092459 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2025-01-16 15:10:16.092472 | orchestrator | 2025-01-16 15:10:16.092486 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-01-16 15:10:16.092501 | orchestrator | Thursday 16 January 2025 15:10:03 +0000 (0:00:00.865) 0:00:06.431 ****** 2025-01-16 15:10:16.092515 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:10:16.092592 | orchestrator | 2025-01-16 15:10:16.092607 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-01-16 15:10:16.092619 | orchestrator | Thursday 16 January 2025 15:10:03 +0000 (0:00:00.175) 0:00:06.606 ****** 2025-01-16 15:10:16.092632 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:10:16.092644 | orchestrator | 2025-01-16 15:10:16.092657 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-01-16 15:10:16.092674 | orchestrator | Thursday 16 January 2025 15:10:03 +0000 (0:00:00.149) 0:00:06.756 ****** 2025-01-16 15:10:16.092687 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:10:16.092699 | orchestrator | 2025-01-16 15:10:16.092711 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-01-16 15:10:16.092724 | orchestrator | Thursday 16 January 2025 15:10:03 +0000 (0:00:00.080) 0:00:06.836 ****** 2025-01-16 15:10:16.092736 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:10:16.092748 | orchestrator | 2025-01-16 15:10:16.092761 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-01-16 15:10:16.092773 | orchestrator | Thursday 16 January 2025 15:10:04 +0000 (0:00:00.090) 0:00:06.927 ****** 2025-01-16 15:10:16.092785 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:10:16.092798 | orchestrator | 2025-01-16 15:10:16.092810 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-01-16 15:10:16.092822 | orchestrator | Thursday 16 January 2025 15:10:04 +0000 (0:00:00.144) 0:00:07.071 ****** 2025-01-16 15:10:16.092834 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:10:16.092852 | orchestrator | 2025-01-16 15:10:16.092864 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-01-16 15:10:16.092877 | orchestrator | Thursday 16 January 2025 15:10:04 +0000 (0:00:00.079) 0:00:07.151 ****** 2025-01-16 15:10:16.092889 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:10:16.092901 | orchestrator | 2025-01-16 15:10:16.092913 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-01-16 15:10:16.092926 | orchestrator | Thursday 16 January 2025 
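
In the ceph-facts section above, the role locates a running ceph-mon container on each monitor host with docker ps -q --filter name=ceph-mon-<hostname> (the three container IDs are visible in the item results) and then builds container_exec_cmd, the prefix used to run ceph commands inside that container; that prefix is what "get current fsid if cluster is already running" uses, delegated here to testbed-node-2. A simplified per-host sketch of that logic (the real implementation lives in ceph-facts/tasks/facts.yml and loops over all mons with delegation):

    - name: Find a running ceph-mon container on this host (simplified sketch)
      ansible.builtin.command: docker ps -q --filter name=ceph-mon-{{ ansible_facts['hostname'] }}
      register: mon_container
      changed_when: false
      failed_when: false

    - name: Build the command prefix for talking to the cluster through that mon
      ansible.builtin.set_fact:
        container_exec_cmd: "docker exec ceph-mon-{{ ansible_facts['hostname'] }}"
      when: mon_container.stdout | length > 0

The cluster fsid is then read through that prefix (roughly container_exec_cmd plus "ceph --cluster ceph fsid") and stored, which is why the later "generate cluster fsid" task is skipped in this run.
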
15:10:04 +0000 (0:00:00.081) 0:00:07.232 ****** 2025-01-16 15:10:16.092938 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:10:16.092951 | orchestrator | 2025-01-16 15:10:16.092963 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-01-16 15:10:16.092976 | orchestrator | Thursday 16 January 2025 15:10:04 +0000 (0:00:00.077) 0:00:07.309 ****** 2025-01-16 15:10:16.092988 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:10:16.093001 | orchestrator | 2025-01-16 15:10:16.093013 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-01-16 15:10:16.093025 | orchestrator | Thursday 16 January 2025 15:10:04 +0000 (0:00:00.079) 0:00:07.389 ****** 2025-01-16 15:10:16.093037 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:10:16.093049 | orchestrator | 2025-01-16 15:10:16.093061 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-01-16 15:10:16.093074 | orchestrator | Thursday 16 January 2025 15:10:04 +0000 (0:00:00.083) 0:00:07.473 ****** 2025-01-16 15:10:16.093086 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:10:16.093098 | orchestrator | 2025-01-16 15:10:16.093111 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-01-16 15:10:16.093123 | orchestrator | Thursday 16 January 2025 15:10:04 +0000 (0:00:00.079) 0:00:07.552 ****** 2025-01-16 15:10:16.093143 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:10:16.093156 | orchestrator | 2025-01-16 15:10:16.093168 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-01-16 15:10:16.093180 | orchestrator | Thursday 16 January 2025 15:10:04 +0000 (0:00:00.086) 0:00:07.639 ****** 2025-01-16 15:10:16.093193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:10:16.093214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:10:16.093228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:10:16.093241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:10:16.093254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:10:16.093272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:10:16.093285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:10:16.093297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-01-16 15:10:16.093332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7deb246b-94c7-4ccf-88e4-d5863b7b5cdf', 'scsi-SQEMU_QEMU_HARDDISK_7deb246b-94c7-4ccf-88e4-d5863b7b5cdf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7deb246b-94c7-4ccf-88e4-d5863b7b5cdf-part1', 'scsi-SQEMU_QEMU_HARDDISK_7deb246b-94c7-4ccf-88e4-d5863b7b5cdf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7deb246b-94c7-4ccf-88e4-d5863b7b5cdf-part14', 'scsi-SQEMU_QEMU_HARDDISK_7deb246b-94c7-4ccf-88e4-d5863b7b5cdf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7deb246b-94c7-4ccf-88e4-d5863b7b5cdf-part15', 'scsi-SQEMU_QEMU_HARDDISK_7deb246b-94c7-4ccf-88e4-d5863b7b5cdf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7deb246b-94c7-4ccf-88e4-d5863b7b5cdf-part16', 'scsi-SQEMU_QEMU_HARDDISK_7deb246b-94c7-4ccf-88e4-d5863b7b5cdf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:10:16.093355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32889f36-f55b-4b84-b5ce-98c4b6c26bc3', 'scsi-SQEMU_QEMU_HARDDISK_32889f36-f55b-4b84-b5ce-98c4b6c26bc3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:10:16.093370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a8ee2823-701b-4f46-84dc-c0a96e4e2751', 'scsi-SQEMU_QEMU_HARDDISK_a8ee2823-701b-4f46-84dc-c0a96e4e2751'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:10:16.093383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_08ea27e5-46c9-491a-b107-4789383846f8', 'scsi-SQEMU_QEMU_HARDDISK_08ea27e5-46c9-491a-b107-4789383846f8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:10:16.093397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-01-16-14-28-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-01-16 15:10:16.093421 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:10:16.093434 | orchestrator | 2025-01-16 15:10:16.093447 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-01-16 15:10:16.093459 | orchestrator | Thursday 16 January 2025 15:10:05 +0000 (0:00:00.288) 0:00:07.928 ****** 2025-01-16 15:10:16.093472 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:10:16.093484 | orchestrator | 2025-01-16 15:10:16.093497 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-01-16 15:10:16.093509 | orchestrator | Thursday 16 January 2025 15:10:05 +0000 (0:00:00.171) 0:00:08.099 ****** 2025-01-16 15:10:16.093522 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:10:16.093579 | orchestrator | 2025-01-16 15:10:16.093593 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-01-16 15:10:16.093605 | orchestrator | Thursday 16 January 2025 15:10:05 +0000 (0:00:00.079) 0:00:08.179 ****** 2025-01-16 15:10:16.093618 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:10:16.093630 | orchestrator | 2025-01-16 15:10:16.093643 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-01-16 15:10:16.093656 | orchestrator | Thursday 16 January 2025 15:10:05 +0000 (0:00:00.085) 0:00:08.264 ****** 2025-01-16 15:10:16.093674 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:10:16.093688 | orchestrator | 2025-01-16 15:10:16.093700 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-01-16 15:10:16.093713 | orchestrator | Thursday 16 January 2025 15:10:05 +0000 (0:00:00.304) 0:00:08.569 ****** 2025-01-16 15:10:16.093726 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:10:16.093738 | orchestrator | 2025-01-16 15:10:16.093751 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-01-16 15:10:16.093768 | orchestrator | Thursday 16 January 2025 15:10:05 +0000 (0:00:00.077) 0:00:08.646 ****** 2025-01-16 15:10:16.093781 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:10:16.093793 | orchestrator | 2025-01-16 15:10:16.093806 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-01-16 15:10:16.093818 | orchestrator | Thursday 16 January 2025 15:10:06 +0000 (0:00:00.308) 
0:00:08.955 ****** 2025-01-16 15:10:16.093830 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:10:16.093843 | orchestrator | 2025-01-16 15:10:16.093855 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-01-16 15:10:16.093868 | orchestrator | Thursday 16 January 2025 15:10:06 +0000 (0:00:00.086) 0:00:09.042 ****** 2025-01-16 15:10:16.093880 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:10:16.093892 | orchestrator | 2025-01-16 15:10:16.093905 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-01-16 15:10:16.093917 | orchestrator | Thursday 16 January 2025 15:10:06 +0000 (0:00:00.151) 0:00:09.193 ****** 2025-01-16 15:10:16.093930 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:10:16.093942 | orchestrator | 2025-01-16 15:10:16.093955 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-01-16 15:10:16.093967 | orchestrator | Thursday 16 January 2025 15:10:06 +0000 (0:00:00.090) 0:00:09.283 ****** 2025-01-16 15:10:16.093979 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-01-16 15:10:16.093992 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-01-16 15:10:16.094005 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-01-16 15:10:16.094105 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:10:16.094122 | orchestrator | 2025-01-16 15:10:16.094135 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-01-16 15:10:16.094156 | orchestrator | Thursday 16 January 2025 15:10:06 +0000 (0:00:00.524) 0:00:09.808 ****** 2025-01-16 15:10:16.094169 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-01-16 15:10:16.094181 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-01-16 15:10:16.094194 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-01-16 15:10:16.094206 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:10:16.094219 | orchestrator | 2025-01-16 15:10:16.094232 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-01-16 15:10:16.094244 | orchestrator | Thursday 16 January 2025 15:10:07 +0000 (0:00:00.306) 0:00:10.115 ****** 2025-01-16 15:10:16.094257 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-01-16 15:10:16.094269 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-01-16 15:10:16.094282 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-01-16 15:10:16.094294 | orchestrator | 2025-01-16 15:10:16.094306 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-01-16 15:10:16.094319 | orchestrator | Thursday 16 January 2025 15:10:07 +0000 (0:00:00.717) 0:00:10.832 ****** 2025-01-16 15:10:16.094331 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-01-16 15:10:16.094344 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-01-16 15:10:16.094356 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-01-16 15:10:16.094369 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:10:16.094381 | orchestrator | 2025-01-16 15:10:16.094394 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-01-16 15:10:16.094406 | orchestrator | Thursday 16 January 2025 15:10:08 +0000 
(0:00:00.139) 0:00:10.971 ****** 2025-01-16 15:10:16.094419 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-01-16 15:10:16.094431 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-01-16 15:10:16.094444 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-01-16 15:10:16.094456 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:10:16.094469 | orchestrator | 2025-01-16 15:10:16.094481 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-01-16 15:10:16.094494 | orchestrator | Thursday 16 January 2025 15:10:08 +0000 (0:00:00.149) 0:00:11.121 ****** 2025-01-16 15:10:16.094506 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'}) 2025-01-16 15:10:16.094519 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-01-16 15:10:16.094563 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-01-16 15:10:16.094582 | orchestrator | 2025-01-16 15:10:16.094600 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-01-16 15:10:16.094620 | orchestrator | Thursday 16 January 2025 15:10:08 +0000 (0:00:00.125) 0:00:11.246 ****** 2025-01-16 15:10:16.094640 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:10:16.094662 | orchestrator | 2025-01-16 15:10:16.094684 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-01-16 15:10:16.094704 | orchestrator | Thursday 16 January 2025 15:10:08 +0000 (0:00:00.086) 0:00:11.333 ****** 2025-01-16 15:10:16.094724 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:10:16.094750 | orchestrator | 2025-01-16 15:10:16.094763 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-01-16 15:10:16.094776 | orchestrator | Thursday 16 January 2025 15:10:08 +0000 (0:00:00.085) 0:00:11.418 ****** 2025-01-16 15:10:16.094788 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-01-16 15:10:16.094810 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-01-16 15:10:16.094824 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-01-16 15:10:16.094836 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-01-16 15:10:16.094857 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-01-16 15:10:16.094869 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-01-16 15:10:16.094882 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-01-16 15:10:16.094894 | orchestrator | 2025-01-16 15:10:16.094912 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-01-16 15:10:16.094925 | orchestrator | Thursday 16 January 2025 15:10:09 +0000 (0:00:00.793) 0:00:12.211 ****** 2025-01-16 15:10:16.094938 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-01-16 15:10:16.094951 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-01-16 15:10:16.094963 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-01-16 
15:10:16.094976 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-01-16 15:10:16.094988 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-01-16 15:10:16.095001 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-01-16 15:10:16.095013 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-01-16 15:10:16.095025 | orchestrator | 2025-01-16 15:10:16.095037 | orchestrator | TASK [ceph-fetch-keys : lookup keys in /etc/ceph] ****************************** 2025-01-16 15:10:16.095050 | orchestrator | Thursday 16 January 2025 15:10:10 +0000 (0:00:01.462) 0:00:13.674 ****** 2025-01-16 15:10:16.095062 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:10:16.095075 | orchestrator | 2025-01-16 15:10:16.095087 | orchestrator | TASK [ceph-fetch-keys : create a local fetch directory if it does not exist] *** 2025-01-16 15:10:16.095099 | orchestrator | Thursday 16 January 2025 15:10:11 +0000 (0:00:00.330) 0:00:14.004 ****** 2025-01-16 15:10:16.095112 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-01-16 15:10:16.095124 | orchestrator | 2025-01-16 15:10:16.095136 | orchestrator | TASK [ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/] *** 2025-01-16 15:10:16.095149 | orchestrator | Thursday 16 January 2025 15:10:11 +0000 (0:00:00.435) 0:00:14.439 ****** 2025-01-16 15:10:16.095162 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.admin.keyring) 2025-01-16 15:10:16.095174 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder-backup.keyring) 2025-01-16 15:10:16.095186 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder.keyring) 2025-01-16 15:10:16.095198 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.crash.keyring) 2025-01-16 15:10:16.095210 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.glance.keyring) 2025-01-16 15:10:16.095223 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.gnocchi.keyring) 2025-01-16 15:10:16.095235 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.manila.keyring) 2025-01-16 15:10:16.095247 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.nova.keyring) 2025-01-16 15:10:16.095260 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-0.keyring) 2025-01-16 15:10:16.095272 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-1.keyring) 2025-01-16 15:10:16.095284 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-2.keyring) 2025-01-16 15:10:16.095296 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mon.keyring) 2025-01-16 15:10:16.095309 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) 2025-01-16 15:10:16.095382 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) 2025-01-16 15:10:16.095397 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) 2025-01-16 15:10:16.095418 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) 2025-01-16 15:10:16.095430 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr/ceph.keyring) 
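The "copy ceph user and bootstrap keys to the ansible server" step above and the later "Copy ceph keys to the configuration repository" play follow a plain fetch-then-copy pattern: the keyrings are pulled from the first monitor into a shared staging directory and then distributed to their overlay destinations in the configuration repository. The snippet below is a minimal illustrative sketch of that pattern only; the play layout, task names and the ceph_fetch_dir variable are assumptions for illustration and are not the actual ceph-ansible or osism/testbed task definitions (the source and destination paths mirror those shown in the log).

# Illustrative sketch only -- assumed variable: ceph_fetch_dir (e.g. /share/<cluster fsid>/).
- name: Fetch ceph keys from the first monitor node (sketch)
  hosts: testbed-node-0
  gather_facts: false
  tasks:
    - name: Copy ceph user and bootstrap keys to the ansible server
      ansible.builtin.fetch:
        src: "{{ item }}"
        dest: "{{ ceph_fetch_dir }}/"   # trailing slash + flat saves each file under its basename
        flat: true
      loop:
        - /etc/ceph/ceph.client.admin.keyring
        - /etc/ceph/ceph.client.cinder.keyring
        - /var/lib/ceph/bootstrap-osd/ceph.keyring

- name: Copy ceph keys to the configuration repository (sketch)
  hosts: testbed-manager
  gather_facts: false
  tasks:
    - name: Copy fetched keyrings to their overlay destinations
      ansible.builtin.copy:
        src: "{{ ceph_fetch_dir }}/{{ item.src }}"   # assumption: the staging directory is reachable on the manager
        dest: "{{ item.dest }}"
        mode: "0640"
        remote_src: true
      loop:
        - src: ceph.client.admin.keyring
          dest: /opt/configuration/environments/infrastructure/files/ceph/ceph.client.admin.keyring
        - src: ceph.client.cinder.keyring
          dest: /opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume/ceph.client.cinder.keyring

Staging everything in one fetch directory and copying from there, rather than reading the monitors again for each destination, is what lets the kolla and infrastructure overlays be (re)populated in a single later play, as the log below shows.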
2025-01-16 15:10:16.095442 | orchestrator | 2025-01-16 15:10:16.095455 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 15:10:16.095468 | orchestrator | testbed-node-0 : ok=28  changed=3  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-01-16 15:10:16.095482 | orchestrator | 2025-01-16 15:10:16.095494 | orchestrator | 2025-01-16 15:10:16.095507 | orchestrator | 2025-01-16 15:10:16.095520 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 15:10:16.095548 | orchestrator | Thursday 16 January 2025 15:10:15 +0000 (0:00:04.104) 0:00:18.544 ****** 2025-01-16 15:10:16.095567 | orchestrator | =============================================================================== 2025-01-16 15:10:16.095580 | orchestrator | ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/ --- 4.10s 2025-01-16 15:10:16.095593 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 1.46s 2025-01-16 15:10:16.095606 | orchestrator | ceph-facts : find a running mon container ------------------------------- 1.24s 2025-01-16 15:10:16.095626 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 0.87s 2025-01-16 15:10:19.117034 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 0.79s 2025-01-16 15:10:19.117187 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 0.72s 2025-01-16 15:10:19.117220 | orchestrator | ceph-facts : set_fact monitor_name ansible_facts['hostname'] ------------ 0.68s 2025-01-16 15:10:19.117246 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 0.57s 2025-01-16 15:10:19.117271 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 0.52s 2025-01-16 15:10:19.117296 | orchestrator | ceph-facts : check if the ceph mon socket is in-use --------------------- 0.45s 2025-01-16 15:10:19.117321 | orchestrator | ceph-fetch-keys : create a local fetch directory if it does not exist --- 0.44s 2025-01-16 15:10:19.117377 | orchestrator | ceph-facts : check if it is atomic host --------------------------------- 0.39s 2025-01-16 15:10:19.117401 | orchestrator | ceph-fetch-keys : lookup keys in /etc/ceph ------------------------------ 0.33s 2025-01-16 15:10:19.117423 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.31s 2025-01-16 15:10:19.117445 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 0.31s 2025-01-16 15:10:19.117468 | orchestrator | ceph-facts : check if the ceph conf exists ------------------------------ 0.30s 2025-01-16 15:10:19.117491 | orchestrator | ceph-facts : set_fact devices generate device list when osd_auto_discovery --- 0.29s 2025-01-16 15:10:19.117517 | orchestrator | ceph-facts : check for a ceph mon socket -------------------------------- 0.29s 2025-01-16 15:10:19.117571 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.27s 2025-01-16 15:10:19.117744 | orchestrator | ceph-facts : set_fact container_exec_cmd -------------------------------- 0.18s 2025-01-16 15:10:19.117770 | orchestrator | 2025-01-16 15:10:16 | INFO  | Task cdc9e25b-9369-471c-8001-0369ed03bd2f is in state SUCCESS 2025-01-16 15:10:19.117797 | orchestrator | 2025-01-16 15:10:16 | INFO  | Task 
7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 2025-01-16 15:10:19.117824 | orchestrator | 2025-01-16 15:10:16 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:10:19.117851 | orchestrator | 2025-01-16 15:10:16 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:10:19.117879 | orchestrator | 2025-01-16 15:10:16 | INFO  | Task 17d1511d-b7f0-408c-a2e7-0923ccaed05f is in state STARTED 2025-01-16 15:10:19.117906 | orchestrator | 2025-01-16 15:10:16 | INFO  | Task 1389b10c-9cf6-465f-9347-2baf85abefff is in state STARTED 2025-01-16 15:10:19.117973 | orchestrator | 2025-01-16 15:10:16 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:10:19.118249 | orchestrator | 2025-01-16 15:10:19 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 2025-01-16 15:10:19.120025 | orchestrator | 2025-01-16 15:10:19 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:10:19.120122 | orchestrator | 2025-01-16 15:10:19 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:10:19.120141 | orchestrator | 2025-01-16 15:10:19 | INFO  | Task 17d1511d-b7f0-408c-a2e7-0923ccaed05f is in state STARTED 2025-01-16 15:10:19.120171 | orchestrator | 2025-01-16 15:10:19 | INFO  | Task 1389b10c-9cf6-465f-9347-2baf85abefff is in state STARTED 2025-01-16 15:10:22.142739 | orchestrator | 2025-01-16 15:10:19 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:10:22.142849 | orchestrator | 2025-01-16 15:10:22 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 2025-01-16 15:10:22.144300 | orchestrator | 2025-01-16 15:10:22 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:10:22.144699 | orchestrator | 2025-01-16 15:10:22 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:10:22.147878 | orchestrator | 2025-01-16 15:10:22 | INFO  | Task 17d1511d-b7f0-408c-a2e7-0923ccaed05f is in state SUCCESS 2025-01-16 15:10:22.148019 | orchestrator | 2025-01-16 15:10:22 | INFO  | Task 1389b10c-9cf6-465f-9347-2baf85abefff is in state STARTED 2025-01-16 15:10:22.148081 | orchestrator | 2025-01-16 15:10:22 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:10:25.172834 | orchestrator | 2025-01-16 15:10:25 | INFO  | Task a50e8657-e1c1-424b-bda6-c92907fa4553 is in state STARTED 2025-01-16 15:10:25.173148 | orchestrator | 2025-01-16 15:10:25 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 2025-01-16 15:10:25.173188 | orchestrator | 2025-01-16 15:10:25 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:10:25.173683 | orchestrator | 2025-01-16 15:10:25 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:10:25.174220 | orchestrator | 2025-01-16 15:10:25 | INFO  | Task 1389b10c-9cf6-465f-9347-2baf85abefff is in state STARTED 2025-01-16 15:10:28.202301 | orchestrator | 2025-01-16 15:10:25 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:10:28.202426 | orchestrator | 2025-01-16 15:10:28 | INFO  | Task a50e8657-e1c1-424b-bda6-c92907fa4553 is in state STARTED 2025-01-16 15:10:28.202631 | orchestrator | 2025-01-16 15:10:28 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 2025-01-16 15:10:28.202652 | orchestrator | 2025-01-16 15:10:28 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:10:28.203169 | orchestrator | 2025-01-16 
15:10:28 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:10:28.203654 | orchestrator | 2025-01-16 15:10:28 | INFO  | Task 1389b10c-9cf6-465f-9347-2baf85abefff is in state STARTED 2025-01-16 15:10:31.229719 | orchestrator | 2025-01-16 15:10:28 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:10:31.230234 | orchestrator | 2025-01-16 15:10:31 | INFO  | Task a50e8657-e1c1-424b-bda6-c92907fa4553 is in state STARTED 2025-01-16 15:10:31.232513 | orchestrator | 2025-01-16 15:10:31 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 2025-01-16 15:10:31.232701 | orchestrator | 2025-01-16 15:10:31 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:10:31.232743 | orchestrator | 2025-01-16 15:10:31 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:10:34.262873 | orchestrator | 2025-01-16 15:10:31 | INFO  | Task 1389b10c-9cf6-465f-9347-2baf85abefff is in state STARTED 2025-01-16 15:10:34.263151 | orchestrator | 2025-01-16 15:10:31 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:10:34.263217 | orchestrator | 2025-01-16 15:10:34 | INFO  | Task a50e8657-e1c1-424b-bda6-c92907fa4553 is in state STARTED 2025-01-16 15:10:34.263709 | orchestrator | 2025-01-16 15:10:34 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 2025-01-16 15:10:34.263766 | orchestrator | 2025-01-16 15:10:34 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:10:34.263788 | orchestrator | 2025-01-16 15:10:34 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:10:34.263819 | orchestrator | 2025-01-16 15:10:34 | INFO  | Task 1389b10c-9cf6-465f-9347-2baf85abefff is in state STARTED 2025-01-16 15:10:37.300104 | orchestrator | 2025-01-16 15:10:34 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:10:37.300207 | orchestrator | 2025-01-16 15:10:37 | INFO  | Task a50e8657-e1c1-424b-bda6-c92907fa4553 is in state STARTED 2025-01-16 15:10:37.300819 | orchestrator | 2025-01-16 15:10:37 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 2025-01-16 15:10:37.302430 | orchestrator | 2025-01-16 15:10:37 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:10:37.302731 | orchestrator | 2025-01-16 15:10:37 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:10:37.302771 | orchestrator | 2025-01-16 15:10:37 | INFO  | Task 1389b10c-9cf6-465f-9347-2baf85abefff is in state STARTED 2025-01-16 15:10:37.302793 | orchestrator | 2025-01-16 15:10:37 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:10:40.326739 | orchestrator | 2025-01-16 15:10:40 | INFO  | Task a50e8657-e1c1-424b-bda6-c92907fa4553 is in state STARTED 2025-01-16 15:10:40.327402 | orchestrator | 2025-01-16 15:10:40 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 2025-01-16 15:10:40.329417 | orchestrator | 2025-01-16 15:10:40 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:10:40.330614 | orchestrator | 2025-01-16 15:10:40 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:10:40.330943 | orchestrator | 2025-01-16 15:10:40 | INFO  | Task 1389b10c-9cf6-465f-9347-2baf85abefff is in state STARTED 2025-01-16 15:10:40.331107 | orchestrator | 2025-01-16 15:10:40 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:10:43.365318 | orchestrator | 2025-01-16 
15:10:43 | INFO  | Task a50e8657-e1c1-424b-bda6-c92907fa4553 is in state STARTED 2025-01-16 15:10:43.366362 | orchestrator | 2025-01-16 15:10:43 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 2025-01-16 15:10:43.366421 | orchestrator | 2025-01-16 15:10:43 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:10:43.369895 | orchestrator | 2025-01-16 15:10:43 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:10:46.402679 | orchestrator | 2025-01-16 15:10:43 | INFO  | Task 1389b10c-9cf6-465f-9347-2baf85abefff is in state STARTED 2025-01-16 15:10:46.402797 | orchestrator | 2025-01-16 15:10:43 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:10:46.402829 | orchestrator | 2025-01-16 15:10:46 | INFO  | Task a50e8657-e1c1-424b-bda6-c92907fa4553 is in state STARTED 2025-01-16 15:10:46.403174 | orchestrator | 2025-01-16 15:10:46 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 2025-01-16 15:10:46.403854 | orchestrator | 2025-01-16 15:10:46 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:10:46.404885 | orchestrator | 2025-01-16 15:10:46 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:10:46.405323 | orchestrator | 2025-01-16 15:10:46 | INFO  | Task 1389b10c-9cf6-465f-9347-2baf85abefff is in state STARTED 2025-01-16 15:10:46.405407 | orchestrator | 2025-01-16 15:10:46 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:10:49.434080 | orchestrator | 2025-01-16 15:10:49 | INFO  | Task a50e8657-e1c1-424b-bda6-c92907fa4553 is in state STARTED 2025-01-16 15:10:49.435394 | orchestrator | 2025-01-16 15:10:49 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 2025-01-16 15:10:49.435675 | orchestrator | 2025-01-16 15:10:49 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:10:49.436077 | orchestrator | 2025-01-16 15:10:49 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:10:49.436119 | orchestrator | 2025-01-16 15:10:49 | INFO  | Task 1389b10c-9cf6-465f-9347-2baf85abefff is in state STARTED 2025-01-16 15:10:52.459599 | orchestrator | 2025-01-16 15:10:49 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:10:52.459703 | orchestrator | 2025-01-16 15:10:52 | INFO  | Task a50e8657-e1c1-424b-bda6-c92907fa4553 is in state STARTED 2025-01-16 15:10:52.459861 | orchestrator | 2025-01-16 15:10:52 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 2025-01-16 15:10:52.459873 | orchestrator | 2025-01-16 15:10:52 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:10:52.459881 | orchestrator | 2025-01-16 15:10:52 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:10:52.460356 | orchestrator | 2025-01-16 15:10:52 | INFO  | Task 1389b10c-9cf6-465f-9347-2baf85abefff is in state STARTED 2025-01-16 15:10:55.484104 | orchestrator | 2025-01-16 15:10:52 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:10:55.484241 | orchestrator | 2025-01-16 15:10:55 | INFO  | Task a50e8657-e1c1-424b-bda6-c92907fa4553 is in state STARTED 2025-01-16 15:10:55.484413 | orchestrator | 2025-01-16 15:10:55 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 2025-01-16 15:10:55.485029 | orchestrator | 2025-01-16 15:10:55 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:10:55.485868 | 
orchestrator | 2025-01-16 15:10:55 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:10:55.486407 | orchestrator | 2025-01-16 15:10:55 | INFO  | Task 1389b10c-9cf6-465f-9347-2baf85abefff is in state STARTED 2025-01-16 15:10:58.507876 | orchestrator | 2025-01-16 15:10:55 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:10:58.508029 | orchestrator | 2025-01-16 15:10:58 | INFO  | Task a50e8657-e1c1-424b-bda6-c92907fa4553 is in state STARTED 2025-01-16 15:10:58.508246 | orchestrator | 2025-01-16 15:10:58 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 2025-01-16 15:10:58.508280 | orchestrator | 2025-01-16 15:10:58 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:10:58.508324 | orchestrator | 2025-01-16 15:10:58 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:10:58.508361 | orchestrator | 2025-01-16 15:10:58 | INFO  | Task 1389b10c-9cf6-465f-9347-2baf85abefff is in state STARTED 2025-01-16 15:11:01.533248 | orchestrator | 2025-01-16 15:10:58 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:11:01.533391 | orchestrator | 2025-01-16 15:11:01 | INFO  | Task a50e8657-e1c1-424b-bda6-c92907fa4553 is in state STARTED 2025-01-16 15:11:01.533657 | orchestrator | 2025-01-16 15:11:01 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 2025-01-16 15:11:01.533679 | orchestrator | 2025-01-16 15:11:01 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:11:01.533690 | orchestrator | 2025-01-16 15:11:01 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:11:01.533985 | orchestrator | 2025-01-16 15:11:01 | INFO  | Task 1389b10c-9cf6-465f-9347-2baf85abefff is in state STARTED 2025-01-16 15:11:04.563724 | orchestrator | 2025-01-16 15:11:01 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:11:04.564718 | orchestrator | 2025-01-16 15:11:04 | INFO  | Task a50e8657-e1c1-424b-bda6-c92907fa4553 is in state STARTED 2025-01-16 15:11:04.565456 | orchestrator | 2025-01-16 15:11:04 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 2025-01-16 15:11:04.565481 | orchestrator | 2025-01-16 15:11:04 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:11:04.565489 | orchestrator | 2025-01-16 15:11:04 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:11:04.565502 | orchestrator | 2025-01-16 15:11:04 | INFO  | Task 1389b10c-9cf6-465f-9347-2baf85abefff is in state STARTED 2025-01-16 15:11:07.598937 | orchestrator | 2025-01-16 15:11:04 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:11:07.599180 | orchestrator | 2025-01-16 15:11:07 | INFO  | Task a50e8657-e1c1-424b-bda6-c92907fa4553 is in state STARTED 2025-01-16 15:11:07.599627 | orchestrator | 2025-01-16 15:11:07 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 2025-01-16 15:11:07.599657 | orchestrator | 2025-01-16 15:11:07 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:11:07.599672 | orchestrator | 2025-01-16 15:11:07 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:11:07.599693 | orchestrator | 2025-01-16 15:11:07 | INFO  | Task 1389b10c-9cf6-465f-9347-2baf85abefff is in state STARTED 2025-01-16 15:11:10.624941 | orchestrator | 2025-01-16 15:11:07 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:11:10.625066 | 
orchestrator | 2025-01-16 15:11:10 | INFO  | Task a50e8657-e1c1-424b-bda6-c92907fa4553 is in state STARTED 2025-01-16 15:11:10.627130 | orchestrator | 2025-01-16 15:11:10 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 2025-01-16 15:11:13.651673 | orchestrator | 2025-01-16 15:11:10 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:11:13.651777 | orchestrator | 2025-01-16 15:11:10 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:11:13.651787 | orchestrator | 2025-01-16 15:11:10 | INFO  | Task 1389b10c-9cf6-465f-9347-2baf85abefff is in state STARTED 2025-01-16 15:11:13.651796 | orchestrator | 2025-01-16 15:11:10 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:11:13.651818 | orchestrator | 2025-01-16 15:11:13 | INFO  | Task a50e8657-e1c1-424b-bda6-c92907fa4553 is in state STARTED 2025-01-16 15:11:13.651977 | orchestrator | 2025-01-16 15:11:13 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 2025-01-16 15:11:13.651992 | orchestrator | 2025-01-16 15:11:13 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:11:13.652552 | orchestrator | 2025-01-16 15:11:13 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:11:13.653152 | orchestrator | 2025-01-16 15:11:13 | INFO  | Task 1389b10c-9cf6-465f-9347-2baf85abefff is in state STARTED 2025-01-16 15:11:13.654914 | orchestrator | 2025-01-16 15:11:13 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:11:16.689825 | orchestrator | 2025-01-16 15:11:16 | INFO  | Task a50e8657-e1c1-424b-bda6-c92907fa4553 is in state STARTED 2025-01-16 15:11:16.690260 | orchestrator | 2025-01-16 15:11:16 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 2025-01-16 15:11:16.690906 | orchestrator | 2025-01-16 15:11:16 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:11:16.690945 | orchestrator | 2025-01-16 15:11:16 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:11:16.694465 | orchestrator | 2025-01-16 15:11:16 | INFO  | Task 1fd2c389-2b57-4b7e-8c29-5c0f5ee45cfb is in state STARTED 2025-01-16 15:11:16.694573 | orchestrator | 2025-01-16 15:11:16 | INFO  | Task 1389b10c-9cf6-465f-9347-2baf85abefff is in state STARTED 2025-01-16 15:11:19.727390 | orchestrator | 2025-01-16 15:11:16 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:11:19.727583 | orchestrator | 2025-01-16 15:11:19.727608 | orchestrator | 2025-01-16 15:11:19.727624 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-01-16 15:11:19.727639 | orchestrator | 2025-01-16 15:11:19.727653 | orchestrator | TASK [Check ceph keys] ********************************************************* 2025-01-16 15:11:19.727667 | orchestrator | Thursday 16 January 2025 15:09:48 +0000 (0:00:00.941) 0:00:00.941 ****** 2025-01-16 15:11:19.727681 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-01-16 15:11:19.727695 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-01-16 15:11:19.727708 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-01-16 15:11:19.727722 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-01-16 15:11:19.727736 | orchestrator | ok: [testbed-manager -> localhost] => 
(item=ceph.client.cinder.keyring) 2025-01-16 15:11:19.727749 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-01-16 15:11:19.727765 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-01-16 15:11:19.727779 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-01-16 15:11:19.727809 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-01-16 15:11:19.727824 | orchestrator | 2025-01-16 15:11:19.727838 | orchestrator | TASK [Set _fetch_ceph_keys fact] *********************************************** 2025-01-16 15:11:19.727852 | orchestrator | Thursday 16 January 2025 15:09:51 +0000 (0:00:03.050) 0:00:03.991 ****** 2025-01-16 15:11:19.727865 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-01-16 15:11:19.727879 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-01-16 15:11:19.727893 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-01-16 15:11:19.727906 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-01-16 15:11:19.727920 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-01-16 15:11:19.727935 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-01-16 15:11:19.727950 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-01-16 15:11:19.727966 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-01-16 15:11:19.727981 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-01-16 15:11:19.727996 | orchestrator | 2025-01-16 15:11:19.728012 | orchestrator | TASK [Point out that the following task takes some time and does not give any output] *** 2025-01-16 15:11:19.728051 | orchestrator | Thursday 16 January 2025 15:09:52 +0000 (0:00:00.817) 0:00:04.809 ****** 2025-01-16 15:11:19.728067 | orchestrator | ok: [testbed-manager] => { 2025-01-16 15:11:19.728086 | orchestrator |  "msg": "The task 'Fetch ceph keys from the first monitor node' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete." 
2025-01-16 15:11:19.728103 | orchestrator | } 2025-01-16 15:11:19.728119 | orchestrator | 2025-01-16 15:11:19.728135 | orchestrator | TASK [Fetch ceph keys from the first monitor node] ***************************** 2025-01-16 15:11:19.728150 | orchestrator | Thursday 16 January 2025 15:09:53 +0000 (0:00:00.782) 0:00:05.591 ****** 2025-01-16 15:11:19.728165 | orchestrator | changed: [testbed-manager] 2025-01-16 15:11:19.728181 | orchestrator | 2025-01-16 15:11:19.728197 | orchestrator | TASK [Copy ceph infrastructure keys to the configuration repository] *********** 2025-01-16 15:11:19.728212 | orchestrator | Thursday 16 January 2025 15:10:16 +0000 (0:00:23.129) 0:00:28.721 ****** 2025-01-16 15:11:19.728228 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.admin.keyring', 'dest': '/opt/configuration/environments/infrastructure/files/ceph/ceph.client.admin.keyring'}) 2025-01-16 15:11:19.728244 | orchestrator | 2025-01-16 15:11:19.728259 | orchestrator | TASK [Copy ceph kolla keys to the configuration repository] ******************** 2025-01-16 15:11:19.728275 | orchestrator | Thursday 16 January 2025 15:10:17 +0000 (0:00:00.989) 0:00:29.711 ****** 2025-01-16 15:11:19.728293 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume/ceph.client.cinder.keyring'}) 2025-01-16 15:11:19.728310 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder.keyring'}) 2025-01-16 15:11:19.728325 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder-backup.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder-backup.keyring'}) 2025-01-16 15:11:19.728340 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.cinder.keyring'}) 2025-01-16 15:11:19.728354 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.nova.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.nova.keyring'}) 2025-01-16 15:11:19.728379 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.glance.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/glance/ceph.client.glance.keyring'}) 2025-01-16 15:11:19.728918 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.gnocchi.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/gnocchi/ceph.client.gnocchi.keyring'}) 2025-01-16 15:11:19.728949 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.manila.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/manila/ceph.client.manila.keyring'}) 2025-01-16 15:11:19.728963 | orchestrator | 2025-01-16 15:11:19.728978 | orchestrator | TASK [Copy ceph custom keys to the configuration repository] ******************* 2025-01-16 15:11:19.728992 | orchestrator | Thursday 16 January 2025 15:10:20 +0000 (0:00:02.371) 0:00:32.082 ****** 2025-01-16 15:11:19.729006 | orchestrator | skipping: [testbed-manager] 2025-01-16 15:11:19.729020 | orchestrator | 2025-01-16 15:11:19.729034 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 15:11:19.729049 | orchestrator | 
testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-01-16 15:11:19.729063 | orchestrator | 2025-01-16 15:11:19.729077 | orchestrator | 2025-01-16 15:11:19.729092 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 15:11:19.729119 | orchestrator | Thursday 16 January 2025 15:10:21 +0000 (0:00:01.008) 0:00:33.090 ****** 2025-01-16 15:11:19.729133 | orchestrator | =============================================================================== 2025-01-16 15:11:19.729147 | orchestrator | Fetch ceph keys from the first monitor node ---------------------------- 23.13s 2025-01-16 15:11:19.729162 | orchestrator | Check ceph keys --------------------------------------------------------- 3.05s 2025-01-16 15:11:19.729175 | orchestrator | Copy ceph kolla keys to the configuration repository -------------------- 2.37s 2025-01-16 15:11:19.729189 | orchestrator | Copy ceph custom keys to the configuration repository ------------------- 1.01s 2025-01-16 15:11:19.729203 | orchestrator | Copy ceph infrastructure keys to the configuration repository ----------- 0.99s 2025-01-16 15:11:19.729217 | orchestrator | Set _fetch_ceph_keys fact ----------------------------------------------- 0.82s 2025-01-16 15:11:19.729239 | orchestrator | Point out that the following task takes some time and does not give any output --- 0.78s 2025-01-16 15:11:19.729253 | orchestrator | 2025-01-16 15:11:19.729267 | orchestrator | 2025-01-16 15:11:19 | INFO  | Task a50e8657-e1c1-424b-bda6-c92907fa4553 is in state SUCCESS 2025-01-16 15:11:19.729282 | orchestrator | 2025-01-16 15:11:19 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 2025-01-16 15:11:19.729302 | orchestrator | 2025-01-16 15:11:19 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:11:19.729419 | orchestrator | 2025-01-16 15:11:19 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:11:19.731121 | orchestrator | 2025-01-16 15:11:19 | INFO  | Task 1fd2c389-2b57-4b7e-8c29-5c0f5ee45cfb is in state STARTED 2025-01-16 15:11:19.731258 | orchestrator | 2025-01-16 15:11:19 | INFO  | Task 1389b10c-9cf6-465f-9347-2baf85abefff is in state STARTED 2025-01-16 15:11:22.771138 | orchestrator | 2025-01-16 15:11:19 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:11:22.771253 | orchestrator | 2025-01-16 15:11:22 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 2025-01-16 15:11:22.771462 | orchestrator | 2025-01-16 15:11:22 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:11:22.771490 | orchestrator | 2025-01-16 15:11:22 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:11:22.772286 | orchestrator | 2025-01-16 15:11:22 | INFO  | Task 4a2e06de-8ad7-4be4-9bfd-e3f480910822 is in state STARTED 2025-01-16 15:11:22.773969 | orchestrator | 2025-01-16 15:11:22 | INFO  | Task 1fd2c389-2b57-4b7e-8c29-5c0f5ee45cfb is in state STARTED 2025-01-16 15:11:22.774330 | orchestrator | 2025-01-16 15:11:22 | INFO  | Task 1389b10c-9cf6-465f-9347-2baf85abefff is in state STARTED 2025-01-16 15:11:25.823329 | orchestrator | 2025-01-16 15:11:22 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:11:25.823626 | orchestrator | 2025-01-16 15:11:25 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 2025-01-16 15:11:25.824164 | orchestrator | 2025-01-16 15:11:25 | INFO  | Task 
78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:11:25.824202 | orchestrator | 2025-01-16 15:11:25 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:11:25.824985 | orchestrator | 2025-01-16 15:11:25 | INFO  | Task 4a2e06de-8ad7-4be4-9bfd-e3f480910822 is in state STARTED 2025-01-16 15:11:25.825174 | orchestrator | 2025-01-16 15:11:25 | INFO  | Task 1fd2c389-2b57-4b7e-8c29-5c0f5ee45cfb is in state SUCCESS 2025-01-16 15:11:25.828593 | orchestrator | 2025-01-16 15:11:25 | INFO  | Task 1389b10c-9cf6-465f-9347-2baf85abefff is in state STARTED 2025-01-16 15:11:28.854890 | orchestrator | 2025-01-16 15:11:25 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:11:28.855055 | orchestrator | 2025-01-16 15:11:28 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 2025-01-16 15:11:28.855321 | orchestrator | 2025-01-16 15:11:28 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:11:28.855915 | orchestrator | 2025-01-16 15:11:28 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:11:28.855947 | orchestrator | 2025-01-16 15:11:28 | INFO  | Task 4a2e06de-8ad7-4be4-9bfd-e3f480910822 is in state STARTED 2025-01-16 15:11:28.855965 | orchestrator | 2025-01-16 15:11:28 | INFO  | Task 1389b10c-9cf6-465f-9347-2baf85abefff is in state STARTED 2025-01-16 15:11:31.884247 | orchestrator | 2025-01-16 15:11:28 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:11:31.884367 | orchestrator | 2025-01-16 15:11:31 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 2025-01-16 15:11:31.885478 | orchestrator | 2025-01-16 15:11:31 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:11:31.885591 | orchestrator | 2025-01-16 15:11:31 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:11:31.885612 | orchestrator | 2025-01-16 15:11:31 | INFO  | Task 4a2e06de-8ad7-4be4-9bfd-e3f480910822 is in state STARTED 2025-01-16 15:11:31.885633 | orchestrator | 2025-01-16 15:11:31 | INFO  | Task 1389b10c-9cf6-465f-9347-2baf85abefff is in state STARTED 2025-01-16 15:11:31.886070 | orchestrator | 2025-01-16 15:11:31 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:11:34.906846 | orchestrator | 2025-01-16 15:11:34 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 2025-01-16 15:11:34.910501 | orchestrator | 2025-01-16 15:11:34 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:11:34.910704 | orchestrator | 2025-01-16 15:11:34 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:11:34.912326 | orchestrator | 2025-01-16 15:11:34 | INFO  | Task 4a2e06de-8ad7-4be4-9bfd-e3f480910822 is in state STARTED 2025-01-16 15:11:34.912755 | orchestrator | 2025-01-16 15:11:34 | INFO  | Task 1389b10c-9cf6-465f-9347-2baf85abefff is in state STARTED 2025-01-16 15:11:34.917149 | orchestrator | 2025-01-16 15:11:34 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:11:37.934922 | orchestrator | 2025-01-16 15:11:37 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 2025-01-16 15:11:37.935978 | orchestrator | 2025-01-16 15:11:37 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:11:37.936045 | orchestrator | 2025-01-16 15:11:37 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:11:37.936374 | orchestrator | 2025-01-16 
15:11:37 | INFO  | Task 4a2e06de-8ad7-4be4-9bfd-e3f480910822 is in state STARTED 2025-01-16 15:11:37.937139 | orchestrator | 2025-01-16 15:11:37 | INFO  | Task 1389b10c-9cf6-465f-9347-2baf85abefff is in state STARTED 2025-01-16 15:11:40.963114 | orchestrator | 2025-01-16 15:11:37 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:11:40.963322 | orchestrator | 2025-01-16 15:11:40 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 2025-01-16 15:11:40.963964 | orchestrator | 2025-01-16 15:11:40 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:11:40.964006 | orchestrator | 2025-01-16 15:11:40 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:11:40.964469 | orchestrator | 2025-01-16 15:11:40 | INFO  | Task 4a2e06de-8ad7-4be4-9bfd-e3f480910822 is in state STARTED 2025-01-16 15:11:40.965120 | orchestrator | 2025-01-16 15:11:40 | INFO  | Task 1389b10c-9cf6-465f-9347-2baf85abefff is in state STARTED 2025-01-16 15:11:44.012298 | orchestrator | 2025-01-16 15:11:40 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:11:44.012390 | orchestrator | 2025-01-16 15:11:44 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 2025-01-16 15:11:44.012779 | orchestrator | 2025-01-16 15:11:44 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:11:44.012791 | orchestrator | 2025-01-16 15:11:44 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:11:44.012800 | orchestrator | 2025-01-16 15:11:44 | INFO  | Task 4a2e06de-8ad7-4be4-9bfd-e3f480910822 is in state STARTED 2025-01-16 15:11:44.013403 | orchestrator | 2025-01-16 15:11:44 | INFO  | Task 1389b10c-9cf6-465f-9347-2baf85abefff is in state STARTED 2025-01-16 15:11:47.043478 | orchestrator | 2025-01-16 15:11:44 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:11:47.043632 | orchestrator | 2025-01-16 15:11:47 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 2025-01-16 15:11:47.043646 | orchestrator | 2025-01-16 15:11:47.043655 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-01-16 15:11:47.043663 | orchestrator | 2025-01-16 15:11:47.043671 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-01-16 15:11:47.043679 | orchestrator | Thursday 16 January 2025 15:10:25 +0000 (0:00:01.537) 0:00:01.537 ****** 2025-01-16 15:11:47.043687 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-01-16 15:11:47.043696 | orchestrator | 2025-01-16 15:11:47.043704 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-01-16 15:11:47.043711 | orchestrator | Thursday 16 January 2025 15:10:26 +0000 (0:00:01.238) 0:00:02.776 ****** 2025-01-16 15:11:47.043720 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-01-16 15:11:47.043727 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-01-16 15:11:47.043735 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-01-16 15:11:47.043743 | orchestrator | 2025-01-16 15:11:47.043751 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-01-16 15:11:47.043759 | orchestrator | Thursday 16 January 2025 15:10:27 +0000 (0:00:01.429) 0:00:04.205 
****** 2025-01-16 15:11:47.043767 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-01-16 15:11:47.043774 | orchestrator | 2025-01-16 15:11:47.043782 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-01-16 15:11:47.043790 | orchestrator | Thursday 16 January 2025 15:10:29 +0000 (0:00:01.384) 0:00:05.590 ****** 2025-01-16 15:11:47.043798 | orchestrator | changed: [testbed-manager] 2025-01-16 15:11:47.043806 | orchestrator | 2025-01-16 15:11:47.043814 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-01-16 15:11:47.043821 | orchestrator | Thursday 16 January 2025 15:10:30 +0000 (0:00:01.225) 0:00:06.815 ****** 2025-01-16 15:11:47.043829 | orchestrator | changed: [testbed-manager] 2025-01-16 15:11:47.043836 | orchestrator | 2025-01-16 15:11:47.043844 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-01-16 15:11:47.043852 | orchestrator | Thursday 16 January 2025 15:10:31 +0000 (0:00:01.179) 0:00:07.995 ****** 2025-01-16 15:11:47.043860 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2025-01-16 15:11:47.043868 | orchestrator | ok: [testbed-manager] 2025-01-16 15:11:47.043875 | orchestrator | 2025-01-16 15:11:47.043883 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-01-16 15:11:47.043891 | orchestrator | Thursday 16 January 2025 15:11:04 +0000 (0:00:33.004) 0:00:40.999 ****** 2025-01-16 15:11:47.043917 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-01-16 15:11:47.043925 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-01-16 15:11:47.043932 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-01-16 15:11:47.043940 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-01-16 15:11:47.043948 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-01-16 15:11:47.043955 | orchestrator | 2025-01-16 15:11:47.043963 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-01-16 15:11:47.043970 | orchestrator | Thursday 16 January 2025 15:11:08 +0000 (0:00:03.507) 0:00:44.507 ****** 2025-01-16 15:11:47.043978 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-01-16 15:11:47.043985 | orchestrator | 2025-01-16 15:11:47.043993 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-01-16 15:11:47.044000 | orchestrator | Thursday 16 January 2025 15:11:09 +0000 (0:00:01.018) 0:00:45.525 ****** 2025-01-16 15:11:47.044008 | orchestrator | skipping: [testbed-manager] 2025-01-16 15:11:47.044015 | orchestrator | 2025-01-16 15:11:47.044023 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-01-16 15:11:47.044030 | orchestrator | Thursday 16 January 2025 15:11:09 +0000 (0:00:00.781) 0:00:46.307 ****** 2025-01-16 15:11:47.044038 | orchestrator | skipping: [testbed-manager] 2025-01-16 15:11:47.044045 | orchestrator | 2025-01-16 15:11:47.044053 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-01-16 15:11:47.044061 | orchestrator | Thursday 16 January 2025 15:11:10 +0000 (0:00:00.773) 0:00:47.081 ****** 2025-01-16 15:11:47.044068 | orchestrator | changed: [testbed-manager] 2025-01-16 
15:11:47.044076 | orchestrator |
2025-01-16 15:11:47.044085 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2025-01-16 15:11:47.044093 | orchestrator | Thursday 16 January 2025 15:11:14 +0000 (0:00:03.512) 0:00:50.594 ******
2025-01-16 15:11:47.044101 | orchestrator | changed: [testbed-manager]
2025-01-16 15:11:47.044110 | orchestrator |
2025-01-16 15:11:47.044118 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for a healthy service] ******
2025-01-16 15:11:47.044127 | orchestrator | Thursday 16 January 2025 15:11:15 +0000 (0:00:01.657) 0:00:52.251 ******
2025-01-16 15:11:47.044135 | orchestrator | changed: [testbed-manager]
2025-01-16 15:11:47.044144 | orchestrator |
2025-01-16 15:11:47.044152 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2025-01-16 15:11:47.044161 | orchestrator | Thursday 16 January 2025 15:11:17 +0000 (0:00:01.243) 0:00:53.495 ******
2025-01-16 15:11:47.044169 | orchestrator | ok: [testbed-manager] => (item=ceph)
2025-01-16 15:11:47.044178 | orchestrator | ok: [testbed-manager] => (item=rados)
2025-01-16 15:11:47.044186 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2025-01-16 15:11:47.044195 | orchestrator | ok: [testbed-manager] => (item=rbd)
2025-01-16 15:11:47.044203 | orchestrator |
2025-01-16 15:11:47.044212 | orchestrator | PLAY RECAP *********************************************************************
2025-01-16 15:11:47.044236 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-01-16 15:11:50.065418 | orchestrator |
2025-01-16 15:11:50.065597 | orchestrator |
2025-01-16 15:11:50.065616 | orchestrator | TASKS RECAP ********************************************************************
2025-01-16 15:11:50.065625 | orchestrator | Thursday 16 January 2025 15:11:19 +0000 (0:00:02.244) 0:00:55.740 ******
2025-01-16 15:11:50.065631 | orchestrator | ===============================================================================
2025-01-16 15:11:50.065636 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 33.00s
2025-01-16 15:11:50.065642 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 3.51s
2025-01-16 15:11:50.065650 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.51s
2025-01-16 15:11:50.065658 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 2.24s
2025-01-16 15:11:50.065666 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 1.66s
2025-01-16 15:11:50.065697 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.43s
2025-01-16 15:11:50.065702 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.38s
2025-01-16 15:11:50.065707 | orchestrator | osism.services.cephclient : Wait for a healthy service ------------------ 1.24s
2025-01-16 15:11:50.065712 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 1.24s
2025-01-16 15:11:50.065717 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.23s
2025-01-16 15:11:50.065722 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 1.18s
2025-01-16 15:11:50.065727 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 1.02s
2025-01-16 15:11:50.065731 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.78s
2025-01-16 15:11:50.065737 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.77s
2025-01-16 15:11:50.065741 | orchestrator |
2025-01-16 15:11:50.065746 | orchestrator | None
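The cephclient play recapped above runs the Ceph client tooling as a docker-compose service rooted in /opt/cephclient and installs wrapper scripts (ceph, ceph-authtool, rados, radosgw-admin, rbd) on the manager; 'Include package tasks' and 'Include rook task' are skipped because the containerized variant is in use. A rough sketch of the kind of docker-compose.yml the 'Copy docker-compose.yml file' task lays down; the image reference and mount targets are placeholders, not values from this run:

  # /opt/cephclient/docker-compose.yml (illustrative sketch)
  services:
    cephclient:
      image: registry.example.com/osism/cephclient:latest   # placeholder image
      restart: unless-stopped
      volumes:
        - /opt/cephclient/configuration:/etc/ceph:ro         # assumed mount target
        - /opt/cephclient/data:/data                         # assumed mount target

The wrapper scripts presumably just exec the matching binary inside this container, which is why plain ceph or rbd calls work on the manager without Ceph packages installed on the host.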
2025-01-16 15:11:50.065752 | orchestrator | 2025-01-16 15:11:47 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED
2025-01-16 15:11:50.065758 | orchestrator | 2025-01-16 15:11:47 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED
2025-01-16 15:11:50.065763 | orchestrator | 2025-01-16 15:11:47 | INFO  | Task 4a2e06de-8ad7-4be4-9bfd-e3f480910822 is in state SUCCESS
2025-01-16 15:11:50.065768 | orchestrator | 2025-01-16 15:11:47 | INFO  | Task 1389b10c-9cf6-465f-9347-2baf85abefff is in state STARTED
2025-01-16 15:11:50.065773 | orchestrator | 2025-01-16 15:11:47 | INFO  | Wait 1 second(s) until the next check
2025-01-16 15:11:50.065789 | orchestrator | 2025-01-16 15:11:50 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED
2025-01-16 15:11:50.066112 | orchestrator | 2025-01-16 15:11:50 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED
2025-01-16 15:11:50.069199 | orchestrator | 2025-01-16 15:11:50 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED
2025-01-16 15:11:50.069467 | orchestrator | 2025-01-16 15:11:50 | INFO  | Task 1389b10c-9cf6-465f-9347-2baf85abefff is in state STARTED
2025-01-16 15:11:50.069561 | orchestrator | 2025-01-16 15:11:50 | INFO  | Wait 1 second(s) until the next check
2025-01-16 15:11:53.094646 | orchestrator | 2025-01-16 15:11:53 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED
2025-01-16 15:11:53.094776 | orchestrator | 2025-01-16 15:11:53 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED
2025-01-16 15:11:53.096298 | orchestrator | 2025-01-16 15:11:53 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED
2025-01-16 15:11:53.096632 | orchestrator | 2025-01-16 15:11:53 | INFO  | Task 5110340e-f357-40b5-94c9-71d10abb6c94 is in state STARTED
2025-01-16 15:11:53.096676 | orchestrator | 2025-01-16 15:11:53 | INFO  | Task 1389b10c-9cf6-465f-9347-2baf85abefff is in state SUCCESS
2025-01-16 15:11:53.096691 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12
2025-01-16 15:11:53.096704 | orchestrator |
2025-01-16 15:11:53.096716 | orchestrator | PLAY [Bootstrap ceph dashboard] ***********************************************
2025-01-16 15:11:53.096729 | orchestrator |
2025-01-16 15:11:53.096742 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2025-01-16 15:11:53.096754 | orchestrator | Thursday 16 January 2025 15:11:22 +0000 (0:00:00.402) 0:00:00.402 ******
2025-01-16 15:11:53.096766 | orchestrator | changed: [testbed-manager]
2025-01-16 15:11:53.096819 | orchestrator |
2025-01-16 15:11:53.096835 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2025-01-16 15:11:53.096846 | orchestrator | Thursday 16 January 2025 15:11:24 +0000 (0:00:01.683) 0:00:02.085 ******
2025-01-16 15:11:53.097205 | orchestrator | changed: [testbed-manager]
2025-01-16 15:11:53.097228 | orchestrator |
2025-01-16 15:11:53.097238 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-01-16
15:11:53.097248 | orchestrator | Thursday 16 January 2025 15:11:25 +0000 (0:00:00.756) 0:00:02.842 ****** 2025-01-16 15:11:53.097257 | orchestrator | changed: [testbed-manager] 2025-01-16 15:11:53.097267 | orchestrator | 2025-01-16 15:11:53.097277 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-01-16 15:11:53.097287 | orchestrator | Thursday 16 January 2025 15:11:26 +0000 (0:00:00.702) 0:00:03.544 ****** 2025-01-16 15:11:53.097297 | orchestrator | changed: [testbed-manager] 2025-01-16 15:11:53.097307 | orchestrator | 2025-01-16 15:11:53.097316 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-01-16 15:11:53.097326 | orchestrator | Thursday 16 January 2025 15:11:26 +0000 (0:00:00.742) 0:00:04.287 ****** 2025-01-16 15:11:53.097337 | orchestrator | changed: [testbed-manager] 2025-01-16 15:11:53.097347 | orchestrator | 2025-01-16 15:11:53.097357 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-01-16 15:11:53.097368 | orchestrator | Thursday 16 January 2025 15:11:27 +0000 (0:00:00.747) 0:00:05.034 ****** 2025-01-16 15:11:53.097438 | orchestrator | changed: [testbed-manager] 2025-01-16 15:11:53.097451 | orchestrator | 2025-01-16 15:11:53.097461 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-01-16 15:11:53.097486 | orchestrator | Thursday 16 January 2025 15:11:28 +0000 (0:00:00.786) 0:00:05.821 ****** 2025-01-16 15:11:53.097494 | orchestrator | changed: [testbed-manager] 2025-01-16 15:11:53.097501 | orchestrator | 2025-01-16 15:11:53.097527 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-01-16 15:11:53.097538 | orchestrator | Thursday 16 January 2025 15:11:29 +0000 (0:00:01.349) 0:00:07.170 ****** 2025-01-16 15:11:53.097915 | orchestrator | changed: [testbed-manager] 2025-01-16 15:11:53.097949 | orchestrator | 2025-01-16 15:11:53.097961 | orchestrator | TASK [Create admin user] ******************************************************* 2025-01-16 15:11:53.097972 | orchestrator | Thursday 16 January 2025 15:11:30 +0000 (0:00:01.015) 0:00:08.186 ****** 2025-01-16 15:11:53.097982 | orchestrator | changed: [testbed-manager] 2025-01-16 15:11:53.097992 | orchestrator | 2025-01-16 15:11:53.098223 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-01-16 15:11:53.098239 | orchestrator | Thursday 16 January 2025 15:11:41 +0000 (0:00:10.660) 0:00:18.847 ****** 2025-01-16 15:11:53.098246 | orchestrator | skipping: [testbed-manager] 2025-01-16 15:11:53.098252 | orchestrator | 2025-01-16 15:11:53.098259 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-01-16 15:11:53.098265 | orchestrator | 2025-01-16 15:11:53.098271 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-01-16 15:11:53.098277 | orchestrator | Thursday 16 January 2025 15:11:41 +0000 (0:00:00.506) 0:00:19.354 ****** 2025-01-16 15:11:53.098283 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:11:53.098289 | orchestrator | 2025-01-16 15:11:53.098296 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-01-16 15:11:53.098302 | orchestrator | 2025-01-16 15:11:53.098308 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-01-16 
15:11:53.098314 | orchestrator | Thursday 16 January 2025 15:11:43 +0000 (0:00:01.489) 0:00:20.843 ****** 2025-01-16 15:11:53.098320 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:11:53.098326 | orchestrator | 2025-01-16 15:11:53.098332 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-01-16 15:11:53.098339 | orchestrator | 2025-01-16 15:11:53.098347 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-01-16 15:11:53.098357 | orchestrator | Thursday 16 January 2025 15:11:44 +0000 (0:00:01.276) 0:00:22.120 ****** 2025-01-16 15:11:53.098366 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:11:53.098376 | orchestrator | 2025-01-16 15:11:53.098385 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 15:11:53.098408 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-01-16 15:11:53.098420 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 15:11:53.098429 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 15:11:53.098440 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 15:11:53.098449 | orchestrator | 2025-01-16 15:11:53.098459 | orchestrator | 2025-01-16 15:11:53.098468 | orchestrator | 2025-01-16 15:11:53.098478 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 15:11:53.098488 | orchestrator | Thursday 16 January 2025 15:11:45 +0000 (0:00:01.016) 0:00:23.136 ****** 2025-01-16 15:11:53.098498 | orchestrator | =============================================================================== 2025-01-16 15:11:53.098572 | orchestrator | Create admin user ------------------------------------------------------ 10.66s 2025-01-16 15:11:53.098582 | orchestrator | Restart ceph manager service -------------------------------------------- 3.78s 2025-01-16 15:11:53.098589 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.68s 2025-01-16 15:11:53.098596 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.35s 2025-01-16 15:11:53.098603 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.02s 2025-01-16 15:11:53.098610 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.79s 2025-01-16 15:11:53.098617 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.76s 2025-01-16 15:11:53.098624 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.75s 2025-01-16 15:11:53.098631 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 0.74s 2025-01-16 15:11:53.098638 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.70s 2025-01-16 15:11:53.098645 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.51s 2025-01-16 15:11:53.098656 | orchestrator | 2025-01-16 15:11:53.098663 | orchestrator | 2025-01-16 15:11:53.098673 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-01-16 15:11:53.098680 | orchestrator | 2025-01-16 15:11:53.098687 | 
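The 'Bootstrap ceph dashboard' play recapped above amounts to a handful of ceph mgr and config calls plus one dashboard admin user. As a hedged sketch (the testbed playbook runs these through the cephclient wrapper and may differ in detail), the logged steps correspond roughly to tasks like:

  - name: Disable the ceph dashboard
    ansible.builtin.command: ceph mgr module disable dashboard

  - name: Configure the dashboard for use behind the external proxy
    ansible.builtin.command: "ceph config set mgr {{ item.key }} {{ item.value }}"
    loop:                                     # keys and values as logged above
      - { key: mgr/dashboard/ssl, value: "false" }
      - { key: mgr/dashboard/server_port, value: "7000" }
      - { key: mgr/dashboard/server_addr, value: "0.0.0.0" }
      - { key: mgr/dashboard/standby_behaviour, value: "error" }
      - { key: mgr/dashboard/standby_error_status_code, value: "404" }

  - name: Enable the ceph dashboard
    ansible.builtin.command: ceph mgr module enable dashboard

  - name: Create admin user                   # user name and password file path are placeholders
    ansible.builtin.command: ceph dashboard ac-user-create admin -i /tmp/ceph_dashboard_password administrator

The long runtime of 'Create admin user' (10.66s) and the follow-up 'Restart ceph manager service' plays fit the usual pattern of the dashboard module being reloaded on each manager after reconfiguration.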
orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-01-16 15:11:53.098694 | orchestrator | Thursday 16 January 2025 15:10:10 +0000 (0:00:00.247) 0:00:00.248 ****** 2025-01-16 15:11:53.098701 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:11:53.098708 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:11:53.098715 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:11:53.098727 | orchestrator | 2025-01-16 15:11:53.098734 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-01-16 15:11:53.098741 | orchestrator | Thursday 16 January 2025 15:10:11 +0000 (0:00:00.304) 0:00:00.552 ****** 2025-01-16 15:11:53.098748 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-01-16 15:11:53.098755 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-01-16 15:11:53.098762 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-01-16 15:11:53.098770 | orchestrator | 2025-01-16 15:11:53.098777 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-01-16 15:11:53.098783 | orchestrator | 2025-01-16 15:11:53.098790 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-01-16 15:11:53.098797 | orchestrator | Thursday 16 January 2025 15:10:11 +0000 (0:00:00.318) 0:00:00.871 ****** 2025-01-16 15:11:53.098805 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:11:53.098813 | orchestrator | 2025-01-16 15:11:53.098825 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-01-16 15:11:53.098832 | orchestrator | Thursday 16 January 2025 15:10:12 +0000 (0:00:00.727) 0:00:01.599 ****** 2025-01-16 15:11:53.098839 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-01-16 15:11:53.098847 | orchestrator | 2025-01-16 15:11:53.098854 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-01-16 15:11:53.098861 | orchestrator | Thursday 16 January 2025 15:10:14 +0000 (0:00:02.296) 0:00:03.896 ****** 2025-01-16 15:11:53.098868 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-01-16 15:11:53.098875 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-01-16 15:11:53.098882 | orchestrator | 2025-01-16 15:11:53.098888 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-01-16 15:11:53.098894 | orchestrator | Thursday 16 January 2025 15:10:18 +0000 (0:00:04.308) 0:00:08.205 ****** 2025-01-16 15:11:53.098900 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-01-16 15:11:53.098907 | orchestrator | 2025-01-16 15:11:53.098913 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-01-16 15:11:53.098919 | orchestrator | Thursday 16 January 2025 15:10:21 +0000 (0:00:02.406) 0:00:10.611 ****** 2025-01-16 15:11:53.098925 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-01-16 15:11:53.098932 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-01-16 15:11:53.098938 | orchestrator | 2025-01-16 15:11:53.098944 | orchestrator | TASK [service-ks-register : barbican | Creating roles] 
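The service-ks-register tasks in this barbican play (Creating services, endpoints, projects, users, roles, and granting user roles) register barbican as the key-manager service in keystone. Expressed as plain OpenStack CLI calls they are roughly equivalent to the following; kolla-ansible actually drives this through its own modules, the region name is assumed, and the password variable is a placeholder:

  - name: Register barbican in keystone       # equivalent CLI sketch only
    ansible.builtin.command: "{{ item }}"
    loop:
      - openstack service create --name barbican key-manager
      - openstack endpoint create --region RegionOne key-manager internal https://api-int.testbed.osism.xyz:9311
      - openstack endpoint create --region RegionOne key-manager public https://api.testbed.osism.xyz:9311
      - openstack project create --domain default service
      - openstack user create --project service --password "{{ barbican_keystone_password }}" barbican
      - openstack role create key-manager:service-admin
      - openstack role create creator
      - openstack role create observer
      - openstack role create audit
      - openstack role add --project service --user barbican admin
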
************************* 2025-01-16 15:11:53.098950 | orchestrator | Thursday 16 January 2025 15:10:23 +0000 (0:00:02.778) 0:00:13.390 ****** 2025-01-16 15:11:53.098957 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-01-16 15:11:53.098963 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-01-16 15:11:53.098969 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-01-16 15:11:53.098975 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-01-16 15:11:53.098982 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-01-16 15:11:53.098988 | orchestrator | 2025-01-16 15:11:53.098994 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-01-16 15:11:53.099000 | orchestrator | Thursday 16 January 2025 15:10:35 +0000 (0:00:11.276) 0:00:24.667 ****** 2025-01-16 15:11:53.099006 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-01-16 15:11:53.099013 | orchestrator | 2025-01-16 15:11:53.099019 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-01-16 15:11:53.099025 | orchestrator | Thursday 16 January 2025 15:10:38 +0000 (0:00:03.345) 0:00:28.012 ****** 2025-01-16 15:11:53.099053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-01-16 15:11:53.099063 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-01-16 15:11:53.099076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 
'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-01-16 15:11:53.099083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-01-16 15:11:53.099094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-01-16 15:11:53.099102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-01-16 15:11:53.099109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 
'timeout': '30'}}}) 2025-01-16 15:11:53.099122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-01-16 15:11:53.099129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-01-16 15:11:53.099135 | orchestrator | 2025-01-16 15:11:53.099142 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-01-16 15:11:53.099148 | orchestrator | Thursday 16 January 2025 15:10:40 +0000 (0:00:01.995) 0:00:30.007 ****** 2025-01-16 15:11:53.099154 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-01-16 15:11:53.099161 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-01-16 15:11:53.099167 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-01-16 15:11:53.099173 | orchestrator | 2025-01-16 15:11:53.099179 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-01-16 15:11:53.099186 | orchestrator | Thursday 16 January 2025 15:10:43 +0000 (0:00:02.694) 0:00:32.702 ****** 2025-01-16 15:11:53.099192 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:11:53.099199 | orchestrator | 2025-01-16 15:11:53.099205 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-01-16 15:11:53.099212 | orchestrator | Thursday 16 January 2025 15:10:43 +0000 (0:00:00.271) 0:00:32.974 ****** 2025-01-16 15:11:53.099222 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:11:53.099233 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:11:53.099244 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:11:53.099254 | orchestrator | 2025-01-16 15:11:53.099265 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-01-16 15:11:53.099276 | orchestrator | Thursday 16 January 2025 15:10:43 +0000 (0:00:00.438) 0:00:33.413 ****** 2025-01-16 15:11:53.099287 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:11:53.099298 | orchestrator | 2025-01-16 15:11:53.099310 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-01-16 15:11:53.099321 | orchestrator | Thursday 16 January 2025 15:10:44 +0000 (0:00:00.711) 0:00:34.125 ****** 2025-01-16 15:11:53.099346 | orchestrator | 
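Every loop in this play iterates over the same three service definitions (barbican-api, barbican-keystone-listener, barbican-worker), echoed as flattened Python dicts. For readability, the barbican-api entry for testbed-node-0 corresponds to roughly the following structure; the values are transcribed from the log output, only the formatting is new:

  barbican-api:
    container_name: barbican_api
    group: barbican-api
    enabled: true
    environment:
      CS_AUTH_KEYS: ""
    image: nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1
    volumes:
      - /etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - barbican:/var/lib/barbican/
      - kolla_logs:/var/log/kolla/
    dimensions: {}
    healthcheck:
      interval: "30"
      retries: "3"
      start_period: "5"
      test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9311"]
      timeout: "30"
    haproxy:
      barbican_api:
        enabled: "yes"
        mode: http
        external: false
        port: "9311"
        listen_port: "9311"
        tls_backend: "no"
      barbican_api_external:
        enabled: "yes"
        mode: http
        external: true
        external_fqdn: api.testbed.osism.xyz
        port: "9311"
        listen_port: "9311"
        tls_backend: "no"
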
changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-01-16 15:11:53.099366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-01-16 15:11:53.099379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-01-16 15:11:53.099391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-01-16 15:11:53.099402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-01-16 15:11:53.099428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-01-16 15:11:53.099440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-01-16 15:11:53.099451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-01-16 15:11:53.099462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': 
'30'}}}) 2025-01-16 15:11:53.099474 | orchestrator | 2025-01-16 15:11:53.099485 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-01-16 15:11:53.099496 | orchestrator | Thursday 16 January 2025 15:10:48 +0000 (0:00:03.431) 0:00:37.556 ****** 2025-01-16 15:11:53.099530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-01-16 15:11:53.099550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-01-16 15:11:53.099568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-01-16 15:11:53.099580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-01-16 15:11:53.099591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-01-16 15:11:53.099602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-01-16 15:11:53.099612 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:11:53.099622 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:11:53.099638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-01-16 15:11:53.099655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-01-16 15:11:53.099666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 
'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-01-16 15:11:53.099677 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:11:53.099687 | orchestrator | 2025-01-16 15:11:53.099698 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-01-16 15:11:53.099708 | orchestrator | Thursday 16 January 2025 15:10:48 +0000 (0:00:00.721) 0:00:38.277 ****** 2025-01-16 15:11:53.099719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-01-16 15:11:53.099730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-01-16 15:11:53.099746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-01-16 15:11:53.099756 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:11:53.099773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': 
{'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-01-16 15:11:53.099784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-01-16 15:11:53.099795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-01-16 15:11:53.099805 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:11:53.099814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-01-16 15:11:53.099830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-01-16 15:11:53.099847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-01-16 15:11:53.099858 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:11:53.099869 | orchestrator | 2025-01-16 15:11:53.099879 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-01-16 15:11:53.099889 | orchestrator | Thursday 16 January 2025 15:10:50 +0000 (0:00:01.734) 0:00:40.011 ****** 2025-01-16 15:11:53.099900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-01-16 15:11:53.099911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-01-16 15:11:53.099922 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-01-16 15:11:53.099943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-01-16 15:11:53.099954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-01-16 15:11:53.099965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-01-16 15:11:53.099976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-01-16 15:11:53.099986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-01-16 15:11:53.100000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-01-16 15:11:53.100011 | orchestrator | 2025-01-16 15:11:53.100021 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-01-16 15:11:53.100032 | orchestrator | Thursday 16 January 2025 15:10:53 +0000 (0:00:03.048) 0:00:43.060 ****** 2025-01-16 15:11:53.100042 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:11:53.100052 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:11:53.100061 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:11:53.100071 | orchestrator | 2025-01-16 15:11:53.100081 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-01-16 15:11:53.100092 | orchestrator | Thursday 16 January 2025 15:10:55 +0000 (0:00:02.382) 0:00:45.442 ****** 2025-01-16 15:11:53.100102 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-01-16 15:11:53.100112 | orchestrator | 2025-01-16 15:11:53.100122 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-01-16 15:11:53.100132 | orchestrator | Thursday 16 January 2025 15:10:57 +0000 (0:00:01.905) 0:00:47.348 ****** 2025-01-16 15:11:53.100142 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:11:53.100157 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:11:53.100167 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:11:53.100177 | orchestrator | 2025-01-16 15:11:53.100188 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-01-16 15:11:53.100197 | orchestrator | Thursday 16 January 2025 15:10:58 +0000 (0:00:01.087) 0:00:48.435 ****** 2025-01-16 15:11:53.100208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-01-16 15:11:53.100219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-01-16 15:11:53.100235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-01-16 15:11:53.100246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-01-16 15:11:53.100263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-01-16 15:11:53.100274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-01-16 15:11:53.100285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-01-16 15:11:53.100295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-01-16 15:11:53.100310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-01-16 15:11:53.100321 | orchestrator | 2025-01-16 15:11:53.100331 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-01-16 15:11:53.100341 | orchestrator | Thursday 16 January 2025 15:11:08 +0000 (0:00:09.490) 0:00:57.926 ****** 2025-01-16 15:11:53.100357 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-01-16 15:11:53.100369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-01-16 15:11:53.100379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-01-16 15:11:53.100390 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:11:53.100400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-01-16 15:11:53.100416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-01-16 15:11:53.100427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-01-16 15:11:53.100437 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:11:53.100455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-01-16 15:11:53.100465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-01-16 15:11:53.100476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-01-16 15:11:53.100491 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:11:53.100502 | orchestrator | 2025-01-16 15:11:53.100528 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-01-16 15:11:53.100539 | orchestrator | Thursday 16 January 2025 15:11:10 +0000 (0:00:02.214) 0:01:00.141 ****** 2025-01-16 15:11:53.100549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-01-16 15:11:53.100560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-01-16 15:11:53.100575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}}}}) 2025-01-16 15:11:53.100582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-01-16 15:11:53.100593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-01-16 15:11:53.100599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-01-16 15:11:53.100606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-01-16 15:11:53.100617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-01-16 15:11:53.100623 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-01-16 15:11:53.100630 | orchestrator | 2025-01-16 15:11:53.100636 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-01-16 15:11:53.100642 | orchestrator | Thursday 16 January 2025 15:11:14 +0000 (0:00:03.708) 0:01:03.850 ****** 2025-01-16 15:11:53.100648 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:11:53.100660 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:11:53.100666 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:11:53.100672 | orchestrator | 2025-01-16 15:11:53.100678 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-01-16 15:11:53.100685 | orchestrator | Thursday 16 January 2025 15:11:15 +0000 (0:00:01.353) 0:01:05.203 ****** 2025-01-16 15:11:53.100691 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:11:53.100697 | orchestrator | 2025-01-16 15:11:53.100703 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-01-16 15:11:53.100709 | orchestrator | Thursday 16 January 2025 15:11:19 +0000 (0:00:03.381) 0:01:08.584 ****** 2025-01-16 15:11:53.100715 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:11:53.100721 | orchestrator | 2025-01-16 15:11:53.100727 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-01-16 15:11:53.100733 | orchestrator | Thursday 16 January 2025 15:11:21 +0000 (0:00:02.300) 0:01:10.885 ****** 2025-01-16 15:11:53.100739 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:11:53.100746 | orchestrator | 2025-01-16 15:11:53.100752 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-01-16 15:11:53.100758 | orchestrator | Thursday 16 January 2025 15:11:30 +0000 (0:00:08.673) 0:01:19.559 ****** 2025-01-16 15:11:53.100764 | orchestrator | 2025-01-16 15:11:53.100770 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-01-16 15:11:53.100776 | orchestrator | Thursday 16 January 2025 15:11:30 +0000 (0:00:00.107) 0:01:19.667 ****** 2025-01-16 15:11:53.100782 | orchestrator | 2025-01-16 15:11:53.100788 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-01-16 15:11:53.100794 | orchestrator | Thursday 16 January 2025 15:11:30 +0000 (0:00:00.382) 0:01:20.050 ****** 2025-01-16 15:11:53.100800 | orchestrator | 2025-01-16 15:11:53.100811 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-01-16 15:11:53.100817 | orchestrator | Thursday 16 January 2025 15:11:30 +0000 (0:00:00.176) 0:01:20.227 ****** 2025-01-16 15:11:53.100823 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:11:53.100829 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:11:53.100835 | orchestrator | changed: [testbed-node-1] 2025-01-16 
15:11:53.100842 | orchestrator | 2025-01-16 15:11:53.100848 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-01-16 15:11:53.100854 | orchestrator | Thursday 16 January 2025 15:11:40 +0000 (0:00:10.286) 0:01:30.513 ****** 2025-01-16 15:11:53.100860 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:11:53.100866 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:11:53.100872 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:11:53.100878 | orchestrator | 2025-01-16 15:11:53.100884 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-01-16 15:11:53.100890 | orchestrator | Thursday 16 January 2025 15:11:44 +0000 (0:00:04.021) 0:01:34.535 ****** 2025-01-16 15:11:53.100896 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:11:53.100902 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:11:53.100908 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:11:53.100914 | orchestrator | 2025-01-16 15:11:53.100920 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 15:11:53.100927 | orchestrator | testbed-node-0 : ok=24  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-01-16 15:11:53.100934 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-01-16 15:11:53.100941 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-01-16 15:11:53.100947 | orchestrator | 2025-01-16 15:11:53.100953 | orchestrator | 2025-01-16 15:11:53.100959 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 15:11:53.100965 | orchestrator | Thursday 16 January 2025 15:11:51 +0000 (0:00:06.151) 0:01:40.687 ****** 2025-01-16 15:11:53.100975 | orchestrator | =============================================================================== 2025-01-16 15:11:53.100981 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 11.28s 2025-01-16 15:11:53.100987 | orchestrator | barbican : Restart barbican-api container ------------------------------ 10.29s 2025-01-16 15:11:53.100993 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 9.49s 2025-01-16 15:11:53.101003 | orchestrator | barbican : Running barbican bootstrap container ------------------------- 8.67s 2025-01-16 15:11:56.121459 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 6.15s 2025-01-16 15:11:56.121677 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 4.31s 2025-01-16 15:11:56.121710 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 4.02s 2025-01-16 15:11:56.121726 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.71s 2025-01-16 15:11:56.121741 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.43s 2025-01-16 15:11:56.121755 | orchestrator | barbican : Creating barbican database ----------------------------------- 3.38s 2025-01-16 15:11:56.121770 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.35s 2025-01-16 15:11:56.121784 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.05s 2025-01-16 15:11:56.121798 | orchestrator | 
service-ks-register : barbican | Creating users ------------------------- 2.78s 2025-01-16 15:11:56.121811 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 2.69s 2025-01-16 15:11:56.121834 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 2.41s 2025-01-16 15:11:56.121858 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.38s 2025-01-16 15:11:56.121881 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.30s 2025-01-16 15:11:56.121905 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 2.30s 2025-01-16 15:11:56.121930 | orchestrator | barbican : Copying over existing policy file ---------------------------- 2.21s 2025-01-16 15:11:56.121954 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.00s 2025-01-16 15:11:56.121974 | orchestrator | 2025-01-16 15:11:53 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:11:56.122008 | orchestrator | 2025-01-16 15:11:56 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 2025-01-16 15:11:59.153629 | orchestrator | 2025-01-16 15:11:56 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:11:59.153728 | orchestrator | 2025-01-16 15:11:56 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:11:59.153739 | orchestrator | 2025-01-16 15:11:56 | INFO  | Task 5110340e-f357-40b5-94c9-71d10abb6c94 is in state STARTED 2025-01-16 15:11:59.153748 | orchestrator | 2025-01-16 15:11:56 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:11:59.153849 | orchestrator | 2025-01-16 15:11:59 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 2025-01-16 15:11:59.154456 | orchestrator | 2025-01-16 15:11:59 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:11:59.154491 | orchestrator | 2025-01-16 15:11:59 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:11:59.154531 | orchestrator | 2025-01-16 15:11:59 | INFO  | Task 5110340e-f357-40b5-94c9-71d10abb6c94 is in state STARTED 2025-01-16 15:12:02.180053 | orchestrator | 2025-01-16 15:11:59 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:12:02.180167 | orchestrator | 2025-01-16 15:12:02 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 2025-01-16 15:12:05.213231 | orchestrator | 2025-01-16 15:12:02 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:12:05.213455 | orchestrator | 2025-01-16 15:12:02 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:12:05.213487 | orchestrator | 2025-01-16 15:12:02 | INFO  | Task 5110340e-f357-40b5-94c9-71d10abb6c94 is in state STARTED 2025-01-16 15:12:05.213568 | orchestrator | 2025-01-16 15:12:02 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:12:05.213605 | orchestrator | 2025-01-16 15:12:05 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 2025-01-16 15:12:05.214153 | orchestrator | 2025-01-16 15:12:05 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:12:05.214184 | orchestrator | 2025-01-16 15:12:05 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:12:05.214208 | orchestrator | 2025-01-16 15:12:05 | INFO  | Task 5110340e-f357-40b5-94c9-71d10abb6c94 is 
in state STARTED 2025-01-16 15:12:08.235212 | orchestrator | 2025-01-16 15:12:05 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:12:08.235323 | orchestrator | 2025-01-16 15:12:08 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 2025-01-16 15:12:08.236999 | orchestrator | 2025-01-16 15:12:08 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:12:08.237073 | orchestrator | 2025-01-16 15:12:08 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:12:08.237095 | orchestrator | 2025-01-16 15:12:08 | INFO  | Task 5110340e-f357-40b5-94c9-71d10abb6c94 is in state STARTED 2025-01-16 15:12:11.256621 | orchestrator | 2025-01-16 15:12:08 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:12:11.256745 | orchestrator | 2025-01-16 15:12:11 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 2025-01-16 15:12:11.258148 | orchestrator | 2025-01-16 15:12:11 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:12:11.258387 | orchestrator | 2025-01-16 15:12:11 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:12:14.286637 | orchestrator | 2025-01-16 15:12:11 | INFO  | Task 5110340e-f357-40b5-94c9-71d10abb6c94 is in state STARTED 2025-01-16 15:12:14.286759 | orchestrator | 2025-01-16 15:12:11 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:12:14.286784 | orchestrator | 2025-01-16 15:12:14 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 2025-01-16 15:12:14.286964 | orchestrator | 2025-01-16 15:12:14 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:12:14.286979 | orchestrator | 2025-01-16 15:12:14 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:12:14.286990 | orchestrator | 2025-01-16 15:12:14 | INFO  | Task 5110340e-f357-40b5-94c9-71d10abb6c94 is in state STARTED 2025-01-16 15:12:14.287153 | orchestrator | 2025-01-16 15:12:14 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:12:17.323699 | orchestrator | 2025-01-16 15:12:17 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 2025-01-16 15:12:17.324596 | orchestrator | 2025-01-16 15:12:17 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:12:17.324909 | orchestrator | 2025-01-16 15:12:17 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:12:17.325412 | orchestrator | 2025-01-16 15:12:17 | INFO  | Task 5110340e-f357-40b5-94c9-71d10abb6c94 is in state STARTED 2025-01-16 15:12:20.353069 | orchestrator | 2025-01-16 15:12:17 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:12:20.353345 | orchestrator | 2025-01-16 15:12:20 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 2025-01-16 15:12:20.353797 | orchestrator | 2025-01-16 15:12:20 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:12:20.353834 | orchestrator | 2025-01-16 15:12:20 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:12:20.353859 | orchestrator | 2025-01-16 15:12:20 | INFO  | Task 5110340e-f357-40b5-94c9-71d10abb6c94 is in state STARTED 2025-01-16 15:12:23.393131 | orchestrator | 2025-01-16 15:12:20 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:12:23.393232 | orchestrator | 2025-01-16 15:12:23 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 
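(Editor's note on the status lines above and below: the deploy wrapper is polling four OSISM task IDs once per second and only moves on when a task leaves the STARTED state and reports SUCCESS, as one of them does a few seconds later. A minimal sketch of that kind of poll-and-wait loop is shown here for orientation; the client object and its get_task_state() helper are hypothetical placeholders, not the actual OSISM task API, which this log does not show.)

    import time

    TERMINAL_STATES = {"SUCCESS", "FAILURE"}

    def wait_for_tasks(client, task_ids, interval=1):
        """Poll each task once per interval until every task reaches a terminal state."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = client.get_task_state(task_id)  # hypothetical helper
                print(f"Task {task_id} is in state {state}")
                if state in TERMINAL_STATES:
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)

(The INFO lines in this log follow the same pattern: the same four task IDs are reported as STARTED on each pass, separated by one-second waits, until a task transitions to SUCCESS and its play output is printed.)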
2025-01-16 15:12:23.393497 | orchestrator | 2025-01-16 15:12:23 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:12:23.393555 | orchestrator | 2025-01-16 15:12:23 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:12:23.393569 | orchestrator | 2025-01-16 15:12:23 | INFO  | Task 5110340e-f357-40b5-94c9-71d10abb6c94 is in state STARTED 2025-01-16 15:12:26.415858 | orchestrator | 2025-01-16 15:12:23 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:12:26.415951 | orchestrator | 2025-01-16 15:12:26 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 2025-01-16 15:12:26.416200 | orchestrator | 2025-01-16 15:12:26 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:12:26.416216 | orchestrator | 2025-01-16 15:12:26 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:12:26.416625 | orchestrator | 2025-01-16 15:12:26 | INFO  | Task 5110340e-f357-40b5-94c9-71d10abb6c94 is in state STARTED 2025-01-16 15:12:26.417682 | orchestrator | 2025-01-16 15:12:26 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:12:29.440604 | orchestrator | 2025-01-16 15:12:29 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 2025-01-16 15:12:29.442202 | orchestrator | 2025-01-16 15:12:29 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:12:29.442499 | orchestrator | 2025-01-16 15:12:29 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:12:29.442991 | orchestrator | 2025-01-16 15:12:29 | INFO  | Task 5110340e-f357-40b5-94c9-71d10abb6c94 is in state STARTED 2025-01-16 15:12:32.475773 | orchestrator | 2025-01-16 15:12:29 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:12:32.476058 | orchestrator | 2025-01-16 15:12:32 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 2025-01-16 15:12:32.476348 | orchestrator | 2025-01-16 15:12:32 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:12:32.476374 | orchestrator | 2025-01-16 15:12:32 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:12:32.476454 | orchestrator | 2025-01-16 15:12:32 | INFO  | Task 5110340e-f357-40b5-94c9-71d10abb6c94 is in state STARTED 2025-01-16 15:12:32.476474 | orchestrator | 2025-01-16 15:12:32 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:12:35.497789 | orchestrator | 2025-01-16 15:12:35 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state STARTED 2025-01-16 15:12:35.497942 | orchestrator | 2025-01-16 15:12:35 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:12:35.499633 | orchestrator | 2025-01-16 15:12:35 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:12:35.499974 | orchestrator | 2025-01-16 15:12:35 | INFO  | Task 5110340e-f357-40b5-94c9-71d10abb6c94 is in state STARTED 2025-01-16 15:12:38.520854 | orchestrator | 2025-01-16 15:12:35 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:12:38.520961 | orchestrator | 2025-01-16 15:12:38 | INFO  | Task de3dd6ef-d077-4cd3-93db-fc2b15a456b4 is in state STARTED 2025-01-16 15:12:38.522850 | orchestrator | 2025-01-16 15:12:38 | INFO  | Task 7a0b1832-e7d1-4b32-aa2f-7c1586e1ce73 is in state SUCCESS 2025-01-16 15:12:38.522945 | orchestrator | 2025-01-16 15:12:38.522957 | orchestrator | 2025-01-16 15:12:38.522965 | orchestrator | PLAY 
[Group hosts based on configuration] ************************************** 2025-01-16 15:12:38.522975 | orchestrator | 2025-01-16 15:12:38.522984 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-01-16 15:12:38.522999 | orchestrator | Thursday 16 January 2025 15:10:11 +0000 (0:00:00.321) 0:00:00.321 ****** 2025-01-16 15:12:38.523007 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:12:38.523015 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:12:38.523024 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:12:38.523035 | orchestrator | 2025-01-16 15:12:38.523042 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-01-16 15:12:38.523049 | orchestrator | Thursday 16 January 2025 15:10:11 +0000 (0:00:00.493) 0:00:00.814 ****** 2025-01-16 15:12:38.523057 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-01-16 15:12:38.523066 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-01-16 15:12:38.523075 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-01-16 15:12:38.523082 | orchestrator | 2025-01-16 15:12:38.523091 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-01-16 15:12:38.523098 | orchestrator | 2025-01-16 15:12:38.523105 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-01-16 15:12:38.523112 | orchestrator | Thursday 16 January 2025 15:10:11 +0000 (0:00:00.236) 0:00:01.050 ****** 2025-01-16 15:12:38.523121 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:12:38.523130 | orchestrator | 2025-01-16 15:12:38.523138 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-01-16 15:12:38.523147 | orchestrator | Thursday 16 January 2025 15:10:12 +0000 (0:00:00.520) 0:00:01.570 ****** 2025-01-16 15:12:38.523155 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-01-16 15:12:38.523204 | orchestrator | 2025-01-16 15:12:38.523212 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-01-16 15:12:38.523219 | orchestrator | Thursday 16 January 2025 15:10:15 +0000 (0:00:02.582) 0:00:04.153 ****** 2025-01-16 15:12:38.523228 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-01-16 15:12:38.523236 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-01-16 15:12:38.523243 | orchestrator | 2025-01-16 15:12:38.523251 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-01-16 15:12:38.523259 | orchestrator | Thursday 16 January 2025 15:10:19 +0000 (0:00:04.263) 0:00:08.416 ****** 2025-01-16 15:12:38.523271 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-01-16 15:12:38.523279 | orchestrator | 2025-01-16 15:12:38.523287 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-01-16 15:12:38.523294 | orchestrator | Thursday 16 January 2025 15:10:21 +0000 (0:00:02.311) 0:00:10.728 ****** 2025-01-16 15:12:38.523302 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-01-16 15:12:38.523310 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-01-16 
15:12:38.523317 | orchestrator | 2025-01-16 15:12:38.523325 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-01-16 15:12:38.523333 | orchestrator | Thursday 16 January 2025 15:10:24 +0000 (0:00:02.579) 0:00:13.308 ****** 2025-01-16 15:12:38.523714 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-01-16 15:12:38.523732 | orchestrator | 2025-01-16 15:12:38.523742 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-01-16 15:12:38.523750 | orchestrator | Thursday 16 January 2025 15:10:26 +0000 (0:00:02.220) 0:00:15.529 ****** 2025-01-16 15:12:38.523759 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-01-16 15:12:38.523767 | orchestrator | 2025-01-16 15:12:38.523775 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-01-16 15:12:38.523785 | orchestrator | Thursday 16 January 2025 15:10:29 +0000 (0:00:03.014) 0:00:18.543 ****** 2025-01-16 15:12:38.523796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-01-16 15:12:38.523819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-01-16 15:12:38.523828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-01-16 15:12:38.523837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-01-16 15:12:38.523856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-01-16 15:12:38.523865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-01-16 15:12:38.524065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.524227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.524239 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.524247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.524255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.524272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.524335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.524344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.524579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.524599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.524607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.524626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.524634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.524642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.524650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.524657 | orchestrator | 2025-01-16 15:12:38.524665 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-01-16 15:12:38.524693 | orchestrator | Thursday 16 January 2025 15:10:31 +0000 (0:00:02.094) 0:00:20.637 ****** 2025-01-16 15:12:38.524701 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:12:38.524710 | orchestrator | 2025-01-16 15:12:38.524718 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-01-16 15:12:38.524725 | orchestrator | Thursday 16 January 2025 15:10:31 +0000 (0:00:00.080) 0:00:20.718 ****** 2025-01-16 15:12:38.524733 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:12:38.524740 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:12:38.524748 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:12:38.524756 | orchestrator | 2025-01-16 15:12:38.524764 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-01-16 15:12:38.524772 | orchestrator | Thursday 16 January 2025 15:10:31 +0000 (0:00:00.249) 0:00:20.968 ****** 2025-01-16 15:12:38.524780 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:12:38.524788 | orchestrator | 2025-01-16 15:12:38.524796 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-01-16 15:12:38.524803 | orchestrator | Thursday 16 January 2025 15:10:32 +0000 (0:00:00.394) 0:00:21.362 ****** 2025-01-16 15:12:38.524811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-01-16 15:12:38.524826 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-01-16 15:12:38.524835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-01-16 15:12:38.524842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-01-16 15:12:38.524866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-01-16 15:12:38.524875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-01-16 15:12:38.524887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.524895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.524903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.524911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.524937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.524948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': 
{'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.524962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.524970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.524977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.524986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.524995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.525022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.525031 | orchestrator | 2025-01-16 15:12:38.525039 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-01-16 15:12:38.525047 | orchestrator | Thursday 16 January 2025 15:10:35 +0000 (0:00:03.761) 0:00:25.123 ****** 2025-01-16 15:12:38.525059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-01-16 15:12:38.525067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-01-16 15:12:38.525075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.525082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.525090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.525112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.525124 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:12:38.525132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-01-16 15:12:38.525140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-01-16 15:12:38.525148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 
'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.525156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.525165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.525190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-01-16 15:12:38.525211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.525219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-01-16 15:12:38.525227 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:12:38.525234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.525242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.525250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.525258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.525272 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:12:38.525280 | orchestrator | 2025-01-16 15:12:38.525308 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-01-16 15:12:38.525326 | orchestrator | Thursday 16 January 2025 15:10:37 +0000 (0:00:01.927) 0:00:27.050 ****** 2025-01-16 15:12:38.525335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-01-16 15:12:38.525345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-01-16 15:12:38.525355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.525364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.525374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.525382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.525411 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:12:38.525421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-01-16 15:12:38.525431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-01-16 15:12:38.525440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.525451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.525460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.525467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.525480 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:12:38.525528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-01-16 15:12:38.525538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-01-16 15:12:38.525546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.525554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.525562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.525574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.525582 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:12:38.525591 | orchestrator | 2025-01-16 15:12:38.525599 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-01-16 15:12:38.525607 | orchestrator | Thursday 16 January 2025 15:10:39 +0000 (0:00:01.814) 0:00:28.865 ****** 2025-01-16 15:12:38.525633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-01-16 15:12:38.525641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-01-16 15:12:38.525649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-01-16 15:12:38.525657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-01-16 15:12:38.525670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-01-16 15:12:38.525697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-01-16 15:12:38.525707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.525717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.525726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.525736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.525751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.525760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.525784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 
'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.525792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.525800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.525813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.525821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.525833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.525842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.525865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.525873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.525882 | orchestrator | 2025-01-16 15:12:38.525889 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-01-16 15:12:38.525897 | orchestrator | Thursday 16 January 2025 15:10:44 +0000 (0:00:05.037) 0:00:33.902 ****** 2025-01-16 15:12:38.525905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-01-16 15:12:38.525913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-01-16 15:12:38.525940 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-01-16 15:12:38.525968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-01-16 15:12:38.525977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-01-16 15:12:38.525985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-01-16 15:12:38.525992 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.526004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.526071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.526103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.526113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.526121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.526129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.526138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.526152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.526162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.526174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.526183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.526191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.526232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.526247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.526256 | orchestrator | 2025-01-16 15:12:38.526265 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-01-16 15:12:38.526274 | orchestrator | Thursday 16 January 2025 15:11:01 +0000 (0:00:16.604) 0:00:50.507 ****** 2025-01-16 15:12:38.526282 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-01-16 15:12:38.526292 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-01-16 15:12:38.526300 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-01-16 15:12:38.526308 | orchestrator | 2025-01-16 15:12:38.526317 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-01-16 15:12:38.526325 | orchestrator | Thursday 16 January 2025 15:11:08 +0000 (0:00:07.109) 0:00:57.616 ****** 2025-01-16 15:12:38.526333 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-01-16 15:12:38.526341 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-01-16 15:12:38.526350 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/designate/templates/named.conf.j2) 2025-01-16 15:12:38.526359 | orchestrator | 2025-01-16 15:12:38.526366 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-01-16 15:12:38.526374 | orchestrator | Thursday 16 January 2025 15:11:13 +0000 (0:00:05.317) 0:01:02.934 ****** 2025-01-16 15:12:38.526389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-01-16 15:12:38.526399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-01-16 15:12:38.526407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.526423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.526431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.526448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-01-16 15:12:38.526461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-01-16 15:12:38.526468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-01-16 15:12:38.526476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.526488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 
'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.526545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-01-16 15:12:38.526556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.526564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.526579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.526588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.526601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.526609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.526624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.526632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.526644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.526652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.526659 | orchestrator | 2025-01-16 15:12:38.526667 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-01-16 15:12:38.526680 | orchestrator | Thursday 16 January 2025 15:11:19 +0000 (0:00:05.694) 0:01:08.629 ****** 2025-01-16 15:12:38.526689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-01-16 15:12:38.526706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-01-16 15:12:38.526716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-01-16 15:12:38.526725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-01-16 15:12:38.526737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.526750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.526758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.526773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-01-16 15:12:38.526781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.526789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.526800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.526808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.526820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.526837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.526846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-01-16 15:12:38.526854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.526863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.526877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.526891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.526899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.526916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.526924 | orchestrator | 2025-01-16 15:12:38.526932 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-01-16 15:12:38.526940 | orchestrator | Thursday 16 January 2025 15:11:24 +0000 (0:00:05.378) 0:01:14.008 ****** 2025-01-16 15:12:38.526950 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:12:38.526960 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:12:38.526972 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:12:38.526980 | orchestrator | 2025-01-16 15:12:38.526989 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-01-16 15:12:38.526998 | orchestrator | Thursday 16 January 2025 15:11:25 +0000 (0:00:00.345) 0:01:14.354 ****** 2025-01-16 15:12:38.527008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-01-16 15:12:38.527017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-01-16 15:12:38.527046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.527056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.527072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.527081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.527089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.527098 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:12:38.527106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-01-16 15:12:38.527125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-01-16 15:12:38.527134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.527148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.527157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.527165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.527173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 
'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.527182 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:12:38.527194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-01-16 15:12:38.527215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-01-16 15:12:38.527225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.527233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.527241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.527249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.527258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.527270 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:12:38.527278 | orchestrator | 2025-01-16 15:12:38.527287 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-01-16 15:12:38.527295 | orchestrator | Thursday 16 January 2025 15:11:26 +0000 (0:00:00.947) 0:01:15.302 ****** 2025-01-16 15:12:38.527316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-01-16 15:12:38.527327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-01-16 15:12:38.527336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-01-16 15:12:38.527345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-01-16 15:12:38.527366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-01-16 15:12:38.527386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-01-16 15:12:38.527394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.527404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.527414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.527423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.527437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.527454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.527468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 
'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.527477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.527486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.527495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.527554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.527574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.527583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.527595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-01-16 15:12:38.527603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-01-16 15:12:38.527611 | orchestrator | 2025-01-16 15:12:38.527616 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-01-16 15:12:38.527621 | orchestrator | Thursday 16 January 2025 15:11:29 +0000 (0:00:03.764) 0:01:19.066 ****** 2025-01-16 15:12:38.527626 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:12:38.527631 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:12:38.527636 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:12:38.527641 | orchestrator | 2025-01-16 15:12:38.527646 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-01-16 15:12:38.527651 | orchestrator | Thursday 16 January 2025 15:11:30 +0000 (0:00:00.995) 0:01:20.062 ****** 2025-01-16 15:12:38.527656 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-01-16 15:12:38.527661 | orchestrator | 2025-01-16 15:12:38.527666 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-01-16 15:12:38.527671 | orchestrator | Thursday 16 January 2025 15:11:32 +0000 (0:00:01.811) 0:01:21.873 ****** 2025-01-16 15:12:38.527676 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-01-16 15:12:38.527681 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-01-16 15:12:38.527686 | orchestrator | 2025-01-16 15:12:38.527691 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-01-16 15:12:38.527696 | orchestrator | Thursday 16 January 2025 15:11:34 +0000 
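The container definitions echoed in the loop above all carry a healthcheck block: healthcheck_curl against the API endpoint (http://192.168.16.10:9001 for designate-api), healthcheck_listen for the bind9 backend on port 53, and healthcheck_port against 5672 for the RPC-driven services. Those healthcheck_* helpers live inside the Kolla images themselves; the stdlib-only Python sketch below is only a rough reading aid for what each probe asserts, not the actual Kolla scripts.

    # Rough, illustrative stand-ins for the healthcheck_* probes named above.
    # The real helpers are scripts shipped in the Kolla images; these only
    # mirror their intent (healthcheck_port additionally ties the connection
    # to a specific process, which is not reproduced here).
    import socket
    import urllib.request

    def curl_ok(url, timeout=5.0):
        """healthcheck_curl analogue: the HTTP endpoint answers at all."""
        try:
            with urllib.request.urlopen(url, timeout=timeout):
                return True
        except OSError:
            return False

    def tcp_ok(host, port, timeout=5.0):
        """Loose healthcheck_listen / healthcheck_port analogue: TCP connect works."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        print("designate-api    :", curl_ok("http://192.168.16.10:9001"))
        print("bind9 (named, 53):", tcp_ok("192.168.16.10", 53))
        print("rabbitmq (5672)  :", tcp_ok("192.168.16.10", 5672))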
(0:00:01.757) 0:01:23.630 ****** 2025-01-16 15:12:38.527704 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:12:38.527709 | orchestrator | 2025-01-16 15:12:38.527714 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-01-16 15:12:38.527719 | orchestrator | Thursday 16 January 2025 15:11:43 +0000 (0:00:09.059) 0:01:32.690 ****** 2025-01-16 15:12:38.527724 | orchestrator | 2025-01-16 15:12:38.527729 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-01-16 15:12:38.527734 | orchestrator | Thursday 16 January 2025 15:11:43 +0000 (0:00:00.082) 0:01:32.773 ****** 2025-01-16 15:12:38.527739 | orchestrator | 2025-01-16 15:12:38.527743 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-01-16 15:12:38.527753 | orchestrator | Thursday 16 January 2025 15:11:43 +0000 (0:00:00.091) 0:01:32.864 ****** 2025-01-16 15:12:38.527758 | orchestrator | 2025-01-16 15:12:38.527763 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-01-16 15:12:38.527768 | orchestrator | Thursday 16 January 2025 15:11:43 +0000 (0:00:00.102) 0:01:32.966 ****** 2025-01-16 15:12:38.527772 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:12:38.527777 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:12:38.527782 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:12:38.527787 | orchestrator | 2025-01-16 15:12:38.527792 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-01-16 15:12:38.527796 | orchestrator | Thursday 16 January 2025 15:11:54 +0000 (0:00:10.396) 0:01:43.363 ****** 2025-01-16 15:12:38.527801 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:12:38.527806 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:12:38.527811 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:12:38.527816 | orchestrator | 2025-01-16 15:12:38.527821 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-01-16 15:12:38.527826 | orchestrator | Thursday 16 January 2025 15:11:59 +0000 (0:00:05.054) 0:01:48.417 ****** 2025-01-16 15:12:38.527830 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:12:38.527835 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:12:38.527840 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:12:38.527845 | orchestrator | 2025-01-16 15:12:38.527850 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-01-16 15:12:38.527855 | orchestrator | Thursday 16 January 2025 15:12:09 +0000 (0:00:10.658) 0:01:59.076 ****** 2025-01-16 15:12:38.527859 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:12:38.527864 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:12:38.527869 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:12:38.527874 | orchestrator | 2025-01-16 15:12:38.527879 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-01-16 15:12:38.527884 | orchestrator | Thursday 16 January 2025 15:12:15 +0000 (0:00:05.886) 0:02:04.962 ****** 2025-01-16 15:12:38.527888 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:12:38.527893 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:12:38.527898 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:12:38.527903 | orchestrator | 2025-01-16 15:12:38.527908 | orchestrator | RUNNING HANDLER 
[designate : Restart designate-worker container] *************** 2025-01-16 15:12:38.527913 | orchestrator | Thursday 16 January 2025 15:12:25 +0000 (0:00:09.668) 0:02:14.631 ****** 2025-01-16 15:12:38.527918 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:12:38.527925 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:12:41.545121 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:12:41.545236 | orchestrator | 2025-01-16 15:12:41.545253 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-01-16 15:12:41.545265 | orchestrator | Thursday 16 January 2025 15:12:33 +0000 (0:00:07.644) 0:02:22.275 ****** 2025-01-16 15:12:41.545276 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:12:41.545286 | orchestrator | 2025-01-16 15:12:41.545297 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 15:12:41.545309 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-01-16 15:12:41.545348 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-01-16 15:12:41.545360 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-01-16 15:12:41.545370 | orchestrator | 2025-01-16 15:12:41.545380 | orchestrator | 2025-01-16 15:12:41.545391 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 15:12:41.545401 | orchestrator | Thursday 16 January 2025 15:12:37 +0000 (0:00:03.988) 0:02:26.264 ****** 2025-01-16 15:12:41.545412 | orchestrator | =============================================================================== 2025-01-16 15:12:41.545446 | orchestrator | designate : Copying over designate.conf -------------------------------- 16.60s 2025-01-16 15:12:41.545457 | orchestrator | designate : Restart designate-central container ------------------------ 10.66s 2025-01-16 15:12:41.545467 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 10.40s 2025-01-16 15:12:41.545478 | orchestrator | designate : Restart designate-mdns container ---------------------------- 9.67s 2025-01-16 15:12:41.545488 | orchestrator | designate : Running Designate bootstrap container ----------------------- 9.06s 2025-01-16 15:12:41.545518 | orchestrator | designate : Restart designate-worker container -------------------------- 7.64s 2025-01-16 15:12:41.545529 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 7.11s 2025-01-16 15:12:41.545540 | orchestrator | designate : Restart designate-producer container ------------------------ 5.89s 2025-01-16 15:12:41.545550 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 5.69s 2025-01-16 15:12:41.545560 | orchestrator | designate : Copying over rndc.key --------------------------------------- 5.38s 2025-01-16 15:12:41.545571 | orchestrator | designate : Copying over named.conf ------------------------------------- 5.32s 2025-01-16 15:12:41.545581 | orchestrator | designate : Restart designate-api container ----------------------------- 5.05s 2025-01-16 15:12:41.545591 | orchestrator | designate : Copying over config.json files for services ----------------- 5.04s 2025-01-16 15:12:41.545615 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 4.26s 2025-01-16 15:12:41.545627 | orchestrator | 
designate : Non-destructive DNS pools update ---------------------------- 3.99s 2025-01-16 15:12:41.545638 | orchestrator | designate : Check designate containers ---------------------------------- 3.76s 2025-01-16 15:12:41.545648 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 3.76s 2025-01-16 15:12:41.545658 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.01s 2025-01-16 15:12:41.545669 | orchestrator | service-ks-register : designate | Creating services --------------------- 2.58s 2025-01-16 15:12:41.545679 | orchestrator | service-ks-register : designate | Creating users ------------------------ 2.58s 2025-01-16 15:12:41.545691 | orchestrator | 2025-01-16 15:12:38 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:12:41.545703 | orchestrator | 2025-01-16 15:12:38 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:12:41.545715 | orchestrator | 2025-01-16 15:12:38 | INFO  | Task 5110340e-f357-40b5-94c9-71d10abb6c94 is in state STARTED 2025-01-16 15:12:41.545726 | orchestrator | 2025-01-16 15:12:38 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:12:41.545753 | orchestrator | 2025-01-16 15:12:41 | INFO  | Task de3dd6ef-d077-4cd3-93db-fc2b15a456b4 is in state STARTED 2025-01-16 15:12:41.546711 | orchestrator | 2025-01-16 15:12:41 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:12:41.546740 | orchestrator | 2025-01-16 15:12:41 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:12:41.547197 | orchestrator | 2025-01-16 15:12:41 | INFO  | Task 5110340e-f357-40b5-94c9-71d10abb6c94 is in state STARTED 2025-01-16 15:12:44.569817 | orchestrator | 2025-01-16 15:12:41 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:12:44.569926 | orchestrator | 2025-01-16 15:12:44 | INFO  | Task de3dd6ef-d077-4cd3-93db-fc2b15a456b4 is in state STARTED 2025-01-16 15:12:44.570181 | orchestrator | 2025-01-16 15:12:44 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:12:44.570206 | orchestrator | 2025-01-16 15:12:44 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:12:44.570902 | orchestrator | 2025-01-16 15:12:44 | INFO  | Task 5110340e-f357-40b5-94c9-71d10abb6c94 is in state STARTED 2025-01-16 15:12:47.596757 | orchestrator | 2025-01-16 15:12:44 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:12:47.596897 | orchestrator | 2025-01-16 15:12:47 | INFO  | Task de3dd6ef-d077-4cd3-93db-fc2b15a456b4 is in state STARTED 2025-01-16 15:12:47.598383 | orchestrator | 2025-01-16 15:12:47 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:12:47.599910 | orchestrator | 2025-01-16 15:12:47 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:12:47.600205 | orchestrator | 2025-01-16 15:12:47 | INFO  | Task 5110340e-f357-40b5-94c9-71d10abb6c94 is in state STARTED 2025-01-16 15:12:47.600376 | orchestrator | 2025-01-16 15:12:47 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:12:50.620379 | orchestrator | 2025-01-16 15:12:50 | INFO  | Task de3dd6ef-d077-4cd3-93db-fc2b15a456b4 is in state STARTED 2025-01-16 15:12:50.623769 | orchestrator | 2025-01-16 15:12:50 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:12:50.627615 | orchestrator | 2025-01-16 15:12:50 | INFO  | Task 
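From here on the deploy wrapper simply polls its background tasks: each "Task <uuid> is in state STARTED" entry is one probe, followed by "Wait 1 second(s) until the next check", until every task reports SUCCESS. A minimal sketch of such a wait loop follows; get_task_state() is a hypothetical stand-in for however the OSISM tooling actually queries task status.

    # Minimal sketch of the one-second polling loop suggested by the log
    # messages; get_task_state() is hypothetical, not an OSISM API.
    import time

    def wait_for_tasks(task_ids, get_task_state, interval=1.0):
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {int(interval)} second(s) until the next check")
                time.sleep(interval)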
5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:12:50.627732 | orchestrator | 2025-01-16 15:12:50 | INFO  | Task 5110340e-f357-40b5-94c9-71d10abb6c94 is in state STARTED 2025-01-16 15:12:53.659401 | orchestrator | 2025-01-16 15:12:50 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:12:53.660537 | orchestrator | 2025-01-16 15:12:53 | INFO  | Task de3dd6ef-d077-4cd3-93db-fc2b15a456b4 is in state STARTED 2025-01-16 15:12:53.661882 | orchestrator | 2025-01-16 15:12:53 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:12:53.661969 | orchestrator | 2025-01-16 15:12:53 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:12:53.661983 | orchestrator | 2025-01-16 15:12:53 | INFO  | Task 5110340e-f357-40b5-94c9-71d10abb6c94 is in state SUCCESS 2025-01-16 15:12:53.662005 | orchestrator | 2025-01-16 15:12:53.662055 | orchestrator | 2025-01-16 15:12:53.662097 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-01-16 15:12:53.662107 | orchestrator | 2025-01-16 15:12:53.662115 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-01-16 15:12:53.662124 | orchestrator | Thursday 16 January 2025 15:11:55 +0000 (0:00:00.563) 0:00:00.563 ****** 2025-01-16 15:12:53.662131 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:12:53.662142 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:12:53.662148 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:12:53.662153 | orchestrator | 2025-01-16 15:12:53.662158 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-01-16 15:12:53.662164 | orchestrator | Thursday 16 January 2025 15:11:56 +0000 (0:00:00.926) 0:00:01.490 ****** 2025-01-16 15:12:53.662169 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-01-16 15:12:53.662176 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-01-16 15:12:53.662181 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-01-16 15:12:53.662203 | orchestrator | 2025-01-16 15:12:53.662208 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-01-16 15:12:53.662213 | orchestrator | 2025-01-16 15:12:53.662218 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-01-16 15:12:53.662223 | orchestrator | Thursday 16 January 2025 15:11:57 +0000 (0:00:00.503) 0:00:01.994 ****** 2025-01-16 15:12:53.662228 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:12:53.662234 | orchestrator | 2025-01-16 15:12:53.662239 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-01-16 15:12:53.662244 | orchestrator | Thursday 16 January 2025 15:11:57 +0000 (0:00:00.489) 0:00:02.483 ****** 2025-01-16 15:12:53.662249 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-01-16 15:12:53.662254 | orchestrator | 2025-01-16 15:12:53.662259 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-01-16 15:12:53.662264 | orchestrator | Thursday 16 January 2025 15:12:00 +0000 (0:00:02.404) 0:00:04.887 ****** 2025-01-16 15:12:53.662269 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-01-16 
15:12:53.662274 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-01-16 15:12:53.662279 | orchestrator | 2025-01-16 15:12:53.662284 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-01-16 15:12:53.662289 | orchestrator | Thursday 16 January 2025 15:12:04 +0000 (0:00:04.584) 0:00:09.472 ****** 2025-01-16 15:12:53.662294 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-01-16 15:12:53.662299 | orchestrator | 2025-01-16 15:12:53.662304 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-01-16 15:12:53.662309 | orchestrator | Thursday 16 January 2025 15:12:07 +0000 (0:00:02.320) 0:00:11.793 ****** 2025-01-16 15:12:53.662314 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-01-16 15:12:53.662319 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-01-16 15:12:53.662324 | orchestrator | 2025-01-16 15:12:53.662329 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-01-16 15:12:53.662334 | orchestrator | Thursday 16 January 2025 15:12:10 +0000 (0:00:02.866) 0:00:14.660 ****** 2025-01-16 15:12:53.662338 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-01-16 15:12:53.662344 | orchestrator | 2025-01-16 15:12:53.662349 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-01-16 15:12:53.662354 | orchestrator | Thursday 16 January 2025 15:12:12 +0000 (0:00:02.516) 0:00:17.176 ****** 2025-01-16 15:12:53.662359 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-01-16 15:12:53.662364 | orchestrator | 2025-01-16 15:12:53.662369 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-01-16 15:12:53.662374 | orchestrator | Thursday 16 January 2025 15:12:15 +0000 (0:00:03.126) 0:00:20.303 ****** 2025-01-16 15:12:53.662379 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:12:53.662384 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:12:53.662390 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:12:53.662395 | orchestrator | 2025-01-16 15:12:53.662400 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-01-16 15:12:53.662405 | orchestrator | Thursday 16 January 2025 15:12:16 +0000 (0:00:00.782) 0:00:21.085 ****** 2025-01-16 15:12:53.662412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-01-16 15:12:53.662433 | orchestrator | changed: [testbed-node-1] => 
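The service-ks-register tasks above do the usual Keystone bookkeeping for placement: a service of type placement, an internal endpoint at https://api-int.testbed.osism.xyz:8780 and a public one at https://api.testbed.osism.xyz:8780, plus a placement user in the service project with the admin role granted. Done by hand with openstacksdk it would look roughly like the sketch below (illustrative only, not the role's implementation; it assumes admin credentials come from a clouds.yaml entry named "admin" and leaves the password as a placeholder).

    # Rough openstacksdk equivalent of the service-ks-register steps above
    # (illustrative; cloud name "admin" and the password are placeholders).
    import openstack

    conn = openstack.connect(cloud="admin")

    service = conn.identity.create_service(type="placement", name="placement")
    for interface, url in (
        ("internal", "https://api-int.testbed.osism.xyz:8780"),
        ("public", "https://api.testbed.osism.xyz:8780"),
    ):
        conn.identity.create_endpoint(service_id=service.id, interface=interface, url=url)

    project = conn.identity.find_project("service")
    user = conn.identity.create_user(name="placement", password="<from secrets>",
                                     default_project_id=project.id)
    role = conn.identity.find_role("admin")
    conn.identity.assign_project_role_to_user(project, user, role)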
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-01-16 15:12:53.662463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-01-16 15:12:53.662469 | orchestrator | 2025-01-16 15:12:53.662474 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-01-16 15:12:53.662482 | orchestrator | Thursday 16 January 2025 15:12:17 +0000 (0:00:01.219) 0:00:22.305 ****** 2025-01-16 15:12:53.662487 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:12:53.662492 | orchestrator | 2025-01-16 15:12:53.662512 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-01-16 15:12:53.662517 | orchestrator | Thursday 16 January 2025 15:12:18 +0000 (0:00:00.524) 0:00:22.829 ****** 2025-01-16 15:12:53.662522 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:12:53.662527 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:12:53.662532 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:12:53.662537 | orchestrator | 2025-01-16 15:12:53.662542 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-01-16 15:12:53.662547 | orchestrator | Thursday 16 January 2025 15:12:18 +0000 (0:00:00.577) 0:00:23.406 ****** 2025-01-16 15:12:53.662552 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:12:53.662557 | orchestrator | 2025-01-16 15:12:53.662561 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-01-16 15:12:53.662566 | orchestrator | Thursday 16 January 2025 15:12:19 +0000 (0:00:00.889) 0:00:24.295 ****** 2025-01-16 15:12:53.662572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 
'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-01-16 15:12:53.662586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-01-16 15:12:53.662591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-01-16 15:12:53.662596 | orchestrator | 2025-01-16 15:12:53.662601 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-01-16 15:12:53.662606 | orchestrator | Thursday 16 January 2025 15:12:21 +0000 (0:00:02.161) 0:00:26.457 ****** 2025-01-16 15:12:53.662617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-01-16 15:12:53.662622 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:12:53.662627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-01-16 15:12:53.662635 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:12:53.662643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-01-16 15:12:53.662648 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:12:53.662653 | orchestrator | 2025-01-16 15:12:53.662658 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-01-16 15:12:53.662663 | orchestrator | Thursday 16 January 2025 15:12:22 +0000 (0:00:00.728) 0:00:27.186 ****** 2025-01-16 15:12:53.662668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-01-16 15:12:53.662673 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:12:53.662683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-01-16 15:12:53.662688 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:12:53.662693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-01-16 15:12:53.662701 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:12:53.662706 | orchestrator | 2025-01-16 15:12:53.662711 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-01-16 15:12:53.662716 | orchestrator | Thursday 16 January 2025 15:12:23 +0000 (0:00:01.244) 0:00:28.430 ****** 2025-01-16 15:12:53.662725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-01-16 15:12:53.662731 | orchestrator | changed: [testbed-node-1] 
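The two backend-TLS copy tasks above skip on every node simply because the echoed service definition sets tls_backend to 'no' for both HAProxy frontends. Roughly, the decision amounts to the check below (an illustration of the data in the log, not kolla-ansible's actual Jinja condition).

    # Why the backend-TLS copy tasks skip: tls_backend is 'no' in the echoed
    # definition. Illustrative condition only, not the role's own check.
    placement_api = {
        "haproxy": {
            "placement_api": {"enabled": True, "tls_backend": "no"},
            "placement_api_external": {"enabled": True, "tls_backend": "no"},
        }
    }

    def needs_backend_tls(service):
        return any(frontend.get("tls_backend") == "yes"
                   for frontend in service.get("haproxy", {}).values())

    print(needs_backend_tls(placement_api))  # False -> both copy tasks skip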
=> (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-01-16 15:12:53.662741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-01-16 15:12:53.662747 | orchestrator | 2025-01-16 15:12:53.662752 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-01-16 15:12:53.662757 | orchestrator | Thursday 16 January 2025 15:12:25 +0000 (0:00:01.783) 0:00:30.214 ****** 2025-01-16 15:12:53.662766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-01-16 15:12:53.662771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-01-16 15:12:53.662779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-01-16 15:12:53.662785 | orchestrator | 2025-01-16 15:12:53.662790 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-01-16 15:12:53.662794 | orchestrator | Thursday 16 January 2025 15:12:29 +0000 (0:00:04.015) 0:00:34.230 ****** 2025-01-16 15:12:53.662800 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-01-16 15:12:53.662805 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-01-16 15:12:53.662809 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-01-16 15:12:53.662814 | orchestrator | 2025-01-16 15:12:53.662819 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-01-16 15:12:53.662824 | orchestrator | Thursday 16 January 2025 15:12:31 +0000 (0:00:02.147) 0:00:36.377 ****** 2025-01-16 15:12:53.662829 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:12:53.662835 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:12:53.662840 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:12:53.662845 | orchestrator | 2025-01-16 15:12:53.662849 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-01-16 15:12:53.662858 | orchestrator | Thursday 16 January 2025 15:12:33 +0000 (0:00:01.281) 0:00:37.659 ****** 2025-01-16 15:12:53.662863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-01-16 15:12:53.662868 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:12:53.662879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-01-16 15:12:53.662884 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:12:53.662894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-01-16 15:12:53.662900 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:12:53.662904 | orchestrator | 2025-01-16 15:12:53.662909 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-01-16 15:12:53.662914 | orchestrator | Thursday 16 January 2025 15:12:34 +0000 (0:00:01.293) 0:00:38.953 ****** 2025-01-16 15:12:53.662919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-01-16 15:12:53.662931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-01-16 15:12:53.662941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-01-16 15:12:53.662946 | orchestrator | 2025-01-16 15:12:53.662951 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-01-16 15:12:53.662956 | orchestrator | Thursday 16 January 2025 15:12:35 +0000 (0:00:01.111) 0:00:40.064 ****** 2025-01-16 15:12:53.662961 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:12:53.662966 | orchestrator | 2025-01-16 15:12:53.662971 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-01-16 15:12:53.662976 | orchestrator | Thursday 16 January 2025 15:12:37 +0000 (0:00:02.162) 0:00:42.227 ****** 2025-01-16 15:12:53.662981 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:12:53.662988 | orchestrator | 2025-01-16 15:12:53.662993 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-01-16 15:12:53.662998 | orchestrator | Thursday 16 January 2025 15:12:39 +0000 (0:00:01.688) 0:00:43.916 ****** 2025-01-16 15:12:53.663003 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:12:53.663008 | orchestrator | 2025-01-16 15:12:53.663013 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-01-16 15:12:53.663017 | orchestrator | Thursday 16 January 2025 15:12:47 +0000 (0:00:08.656) 0:00:52.572 ****** 2025-01-16 15:12:53.663022 | orchestrator | 2025-01-16 15:12:53.663033 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-01-16 15:12:56.680324 | orchestrator | Thursday 16 January 2025 15:12:48 +0000 (0:00:00.041) 
0:00:52.613 ****** 2025-01-16 15:12:56.680413 | orchestrator | 2025-01-16 15:12:56.680428 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-01-16 15:12:56.680435 | orchestrator | Thursday 16 January 2025 15:12:48 +0000 (0:00:00.121) 0:00:52.734 ****** 2025-01-16 15:12:56.680440 | orchestrator | 2025-01-16 15:12:56.680445 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-01-16 15:12:56.680451 | orchestrator | Thursday 16 January 2025 15:12:48 +0000 (0:00:00.041) 0:00:52.776 ****** 2025-01-16 15:12:56.680456 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:12:56.680482 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:12:56.680488 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:12:56.680493 | orchestrator | 2025-01-16 15:12:56.680545 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 15:12:56.680551 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-01-16 15:12:56.680559 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-01-16 15:12:56.680564 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-01-16 15:12:56.680569 | orchestrator | 2025-01-16 15:12:56.680574 | orchestrator | 2025-01-16 15:12:56.680579 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 15:12:56.680584 | orchestrator | Thursday 16 January 2025 15:12:51 +0000 (0:00:03.711) 0:00:56.488 ****** 2025-01-16 15:12:56.680589 | orchestrator | =============================================================================== 2025-01-16 15:12:56.680594 | orchestrator | placement : Running placement bootstrap container ----------------------- 8.66s 2025-01-16 15:12:56.680599 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 4.58s 2025-01-16 15:12:56.680604 | orchestrator | placement : Copying over placement.conf --------------------------------- 4.02s 2025-01-16 15:12:56.680609 | orchestrator | placement : Restart placement-api container ----------------------------- 3.71s 2025-01-16 15:12:56.680614 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.13s 2025-01-16 15:12:56.680618 | orchestrator | service-ks-register : placement | Creating users ------------------------ 2.87s 2025-01-16 15:12:56.680623 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 2.52s 2025-01-16 15:12:56.680628 | orchestrator | service-ks-register : placement | Creating services --------------------- 2.40s 2025-01-16 15:12:56.680633 | orchestrator | service-ks-register : placement | Creating projects --------------------- 2.32s 2025-01-16 15:12:56.680638 | orchestrator | placement : Creating placement databases -------------------------------- 2.16s 2025-01-16 15:12:56.680643 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 2.16s 2025-01-16 15:12:56.680648 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 2.15s 2025-01-16 15:12:56.680652 | orchestrator | placement : Copying over config.json files for services ----------------- 1.78s 2025-01-16 15:12:56.680657 | orchestrator | placement : Creating placement databases user and setting permissions --- 
1.69s 2025-01-16 15:12:56.680662 | orchestrator | placement : Copying over existing policy file --------------------------- 1.29s 2025-01-16 15:12:56.680667 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.28s 2025-01-16 15:12:56.680671 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 1.24s 2025-01-16 15:12:56.680676 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.22s 2025-01-16 15:12:56.680681 | orchestrator | placement : Check placement containers ---------------------------------- 1.11s 2025-01-16 15:12:56.680688 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.93s 2025-01-16 15:12:56.680696 | orchestrator | 2025-01-16 15:12:53 | INFO  | Task 0fb484db-9804-470d-b1b7-8dc92d66c3b6 is in state STARTED 2025-01-16 15:12:56.680704 | orchestrator | 2025-01-16 15:12:53 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:12:56.680808 | orchestrator | 2025-01-16 15:12:56 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:12:56.680819 | orchestrator | 2025-01-16 15:12:56 | INFO  | Task de3dd6ef-d077-4cd3-93db-fc2b15a456b4 is in state STARTED 2025-01-16 15:12:56.680824 | orchestrator | 2025-01-16 15:12:56 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:12:56.680842 | orchestrator | 2025-01-16 15:12:56 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:12:56.681232 | orchestrator | 2025-01-16 15:12:56 | INFO  | Task 0fb484db-9804-470d-b1b7-8dc92d66c3b6 is in state SUCCESS 2025-01-16 15:12:56.681319 | orchestrator | 2025-01-16 15:12:56 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:12:59.703387 | orchestrator | 2025-01-16 15:12:59 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:12:59.703875 | orchestrator | 2025-01-16 15:12:59 | INFO  | Task de3dd6ef-d077-4cd3-93db-fc2b15a456b4 is in state STARTED 2025-01-16 15:12:59.703923 | orchestrator | 2025-01-16 15:12:59 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:12:59.704343 | orchestrator | 2025-01-16 15:12:59 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:12:59.704414 | orchestrator | 2025-01-16 15:12:59 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:13:02.729462 | orchestrator | 2025-01-16 15:13:02 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:13:02.729634 | orchestrator | 2025-01-16 15:13:02 | INFO  | Task de3dd6ef-d077-4cd3-93db-fc2b15a456b4 is in state STARTED 2025-01-16 15:13:02.729650 | orchestrator | 2025-01-16 15:13:02 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:13:02.730078 | orchestrator | 2025-01-16 15:13:02 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:13:02.730698 | orchestrator | 2025-01-16 15:13:02 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:13:05.763438 | orchestrator | 2025-01-16 15:13:05 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:13:05.765654 | orchestrator | 2025-01-16 15:13:05 | INFO  | Task de3dd6ef-d077-4cd3-93db-fc2b15a456b4 is in state STARTED 2025-01-16 15:13:05.766724 | orchestrator | 2025-01-16 15:13:05 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:13:05.766815 | orchestrator | 
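Once the placement_api containers are restarted, the same URL the container healthcheck probes (http://192.168.16.10:8780 on the first controller) can be queried by hand; the API root serves its version document without authentication, so a 200 response with a versions key is a quick "service is up" signal. A small illustrative check:

    # Manual counterpart to the placement_api healthcheck_curl above
    # (illustrative; 192.168.16.10 is the first controller's API address).
    import json
    import urllib.request

    with urllib.request.urlopen("http://192.168.16.10:8780/", timeout=5) as resp:
        status = resp.status
        doc = json.load(resp)

    print(status, "versions" in doc)  # expect: 200 True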
2025-01-16 15:13:05 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:13:08.786860 | orchestrator | 2025-01-16 15:13:05 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:13:08.787005 | orchestrator | 2025-01-16 15:13:08 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:13:11.818263 | orchestrator | 2025-01-16 15:13:08 | INFO  | Task de3dd6ef-d077-4cd3-93db-fc2b15a456b4 is in state STARTED 2025-01-16 15:13:11.818382 | orchestrator | 2025-01-16 15:13:08 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:13:11.818417 | orchestrator | 2025-01-16 15:13:08 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:13:11.818428 | orchestrator | 2025-01-16 15:13:08 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:13:11.818452 | orchestrator | 2025-01-16 15:13:11 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:13:14.836597 | orchestrator | 2025-01-16 15:13:11 | INFO  | Task de3dd6ef-d077-4cd3-93db-fc2b15a456b4 is in state STARTED 2025-01-16 15:13:14.836715 | orchestrator | 2025-01-16 15:13:11 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:13:14.836734 | orchestrator | 2025-01-16 15:13:11 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:13:14.836750 | orchestrator | 2025-01-16 15:13:11 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:13:14.836812 | orchestrator | 2025-01-16 15:13:14 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:13:14.840563 | orchestrator | 2025-01-16 15:13:14 | INFO  | Task de3dd6ef-d077-4cd3-93db-fc2b15a456b4 is in state STARTED 2025-01-16 15:13:14.840635 | orchestrator | 2025-01-16 15:13:14 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:13:17.868777 | orchestrator | 2025-01-16 15:13:14 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:13:17.868863 | orchestrator | 2025-01-16 15:13:14 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:13:17.868958 | orchestrator | 2025-01-16 15:13:17 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:13:17.868968 | orchestrator | 2025-01-16 15:13:17 | INFO  | Task de3dd6ef-d077-4cd3-93db-fc2b15a456b4 is in state STARTED 2025-01-16 15:13:17.868974 | orchestrator | 2025-01-16 15:13:17 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:13:17.868982 | orchestrator | 2025-01-16 15:13:17 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:13:20.899713 | orchestrator | 2025-01-16 15:13:17 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:13:20.899942 | orchestrator | 2025-01-16 15:13:20 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:13:20.900405 | orchestrator | 2025-01-16 15:13:20 | INFO  | Task de3dd6ef-d077-4cd3-93db-fc2b15a456b4 is in state STARTED 2025-01-16 15:13:20.900437 | orchestrator | 2025-01-16 15:13:20 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:13:20.900473 | orchestrator | 2025-01-16 15:13:20 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:13:23.928686 | orchestrator | 2025-01-16 15:13:20 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:13:23.928827 | orchestrator | 2025-01-16 15:13:23 | INFO 
 | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:13:23.929379 | orchestrator | 2025-01-16 15:13:23 | INFO  | Task de3dd6ef-d077-4cd3-93db-fc2b15a456b4 is in state STARTED 2025-01-16 15:13:23.930878 | orchestrator | 2025-01-16 15:13:23 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:13:23.931082 | orchestrator | 2025-01-16 15:13:23 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:13:26.954333 | orchestrator | 2025-01-16 15:13:23 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:13:26.954762 | orchestrator | 2025-01-16 15:13:26 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:13:29.978614 | orchestrator | 2025-01-16 15:13:26 | INFO  | Task de3dd6ef-d077-4cd3-93db-fc2b15a456b4 is in state STARTED 2025-01-16 15:13:29.978742 | orchestrator | 2025-01-16 15:13:26 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:13:29.978762 | orchestrator | 2025-01-16 15:13:26 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:13:29.978781 | orchestrator | 2025-01-16 15:13:26 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:13:29.978925 | orchestrator | 2025-01-16 15:13:29 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:13:29.979355 | orchestrator | 2025-01-16 15:13:29 | INFO  | Task de3dd6ef-d077-4cd3-93db-fc2b15a456b4 is in state STARTED 2025-01-16 15:13:29.979384 | orchestrator | 2025-01-16 15:13:29 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:13:29.979409 | orchestrator | 2025-01-16 15:13:29 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:13:29.979837 | orchestrator | 2025-01-16 15:13:29 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:13:33.004904 | orchestrator | 2025-01-16 15:13:33 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:13:33.005009 | orchestrator | 2025-01-16 15:13:33 | INFO  | Task de3dd6ef-d077-4cd3-93db-fc2b15a456b4 is in state STARTED 2025-01-16 15:13:33.005023 | orchestrator | 2025-01-16 15:13:33 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:13:33.005038 | orchestrator | 2025-01-16 15:13:33 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:13:36.029911 | orchestrator | 2025-01-16 15:13:33 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:13:36.030072 | orchestrator | 2025-01-16 15:13:36 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:13:36.030441 | orchestrator | 2025-01-16 15:13:36 | INFO  | Task de3dd6ef-d077-4cd3-93db-fc2b15a456b4 is in state STARTED 2025-01-16 15:13:36.030478 | orchestrator | 2025-01-16 15:13:36 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:13:36.030917 | orchestrator | 2025-01-16 15:13:36 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:13:36.031045 | orchestrator | 2025-01-16 15:13:36 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:13:39.058391 | orchestrator | 2025-01-16 15:13:39 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:13:39.058675 | orchestrator | 2025-01-16 15:13:39 | INFO  | Task de3dd6ef-d077-4cd3-93db-fc2b15a456b4 is in state STARTED 2025-01-16 15:13:39.058948 | orchestrator | 2025-01-16 15:13:39 | INFO  | 
Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:13:39.061068 | orchestrator | 2025-01-16 15:13:39 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:13:42.094823 | orchestrator | 2025-01-16 15:13:39 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:13:42.094964 | orchestrator | 2025-01-16 15:13:42 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:13:42.097563 | orchestrator | 2025-01-16 15:13:42 | INFO  | Task de3dd6ef-d077-4cd3-93db-fc2b15a456b4 is in state STARTED 2025-01-16 15:13:42.097786 | orchestrator | 2025-01-16 15:13:42 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:13:45.128280 | orchestrator | 2025-01-16 15:13:42 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:13:45.128395 | orchestrator | 2025-01-16 15:13:42 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:13:45.128428 | orchestrator | 2025-01-16 15:13:45 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:13:45.128800 | orchestrator | 2025-01-16 15:13:45 | INFO  | Task de3dd6ef-d077-4cd3-93db-fc2b15a456b4 is in state STARTED 2025-01-16 15:13:45.128839 | orchestrator | 2025-01-16 15:13:45 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:13:45.129253 | orchestrator | 2025-01-16 15:13:45 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:13:48.162851 | orchestrator | 2025-01-16 15:13:45 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:13:48.162948 | orchestrator | 2025-01-16 15:13:48 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:13:48.164941 | orchestrator | 2025-01-16 15:13:48 | INFO  | Task de3dd6ef-d077-4cd3-93db-fc2b15a456b4 is in state STARTED 2025-01-16 15:13:48.164975 | orchestrator | 2025-01-16 15:13:48 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:13:48.164991 | orchestrator | 2025-01-16 15:13:48 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:13:48.165000 | orchestrator | 2025-01-16 15:13:48 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:13:51.193147 | orchestrator | 2025-01-16 15:13:51 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:13:51.193771 | orchestrator | 2025-01-16 15:13:51 | INFO  | Task de3dd6ef-d077-4cd3-93db-fc2b15a456b4 is in state STARTED 2025-01-16 15:13:51.194106 | orchestrator | 2025-01-16 15:13:51 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:13:51.194775 | orchestrator | 2025-01-16 15:13:51 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:13:51.194836 | orchestrator | 2025-01-16 15:13:51 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:13:54.224825 | orchestrator | 2025-01-16 15:13:54 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:13:54.226143 | orchestrator | 2025-01-16 15:13:54 | INFO  | Task de3dd6ef-d077-4cd3-93db-fc2b15a456b4 is in state STARTED 2025-01-16 15:13:54.226204 | orchestrator | 2025-01-16 15:13:54 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:13:54.226225 | orchestrator | 2025-01-16 15:13:54 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:13:57.247155 | orchestrator | 2025-01-16 15:13:54 | INFO  | 
Wait 1 second(s) until the next check 2025-01-16 15:13:57.247343 | orchestrator | 2025-01-16 15:13:57 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:13:57.247627 | orchestrator | 2025-01-16 15:13:57 | INFO  | Task de3dd6ef-d077-4cd3-93db-fc2b15a456b4 is in state STARTED 2025-01-16 15:13:57.247659 | orchestrator | 2025-01-16 15:13:57 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:13:57.247674 | orchestrator | 2025-01-16 15:13:57 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:14:00.281971 | orchestrator | 2025-01-16 15:13:57 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:14:00.282162 | orchestrator | 2025-01-16 15:14:00 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:14:00.282717 | orchestrator | 2025-01-16 15:14:00 | INFO  | Task de3dd6ef-d077-4cd3-93db-fc2b15a456b4 is in state STARTED 2025-01-16 15:14:00.282974 | orchestrator | 2025-01-16 15:14:00 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:14:03.310653 | orchestrator | 2025-01-16 15:14:00 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:14:03.310782 | orchestrator | 2025-01-16 15:14:00 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:14:03.310830 | orchestrator | 2025-01-16 15:14:03 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:14:06.334928 | orchestrator | 2025-01-16 15:14:03 | INFO  | Task de3dd6ef-d077-4cd3-93db-fc2b15a456b4 is in state STARTED 2025-01-16 15:14:06.335054 | orchestrator | 2025-01-16 15:14:03 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:14:06.335073 | orchestrator | 2025-01-16 15:14:03 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:14:06.335090 | orchestrator | 2025-01-16 15:14:03 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:14:06.335125 | orchestrator | 2025-01-16 15:14:06 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:14:06.335300 | orchestrator | 2025-01-16 15:14:06 | INFO  | Task de3dd6ef-d077-4cd3-93db-fc2b15a456b4 is in state STARTED 2025-01-16 15:14:06.336180 | orchestrator | 2025-01-16 15:14:06 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:14:06.336695 | orchestrator | 2025-01-16 15:14:06 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:14:09.356076 | orchestrator | 2025-01-16 15:14:06 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:14:09.356305 | orchestrator | 2025-01-16 15:14:09 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:14:09.356745 | orchestrator | 2025-01-16 15:14:09 | INFO  | Task de3dd6ef-d077-4cd3-93db-fc2b15a456b4 is in state STARTED 2025-01-16 15:14:09.356802 | orchestrator | 2025-01-16 15:14:09 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:14:09.357029 | orchestrator | 2025-01-16 15:14:09 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:14:12.386538 | orchestrator | 2025-01-16 15:14:09 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:14:12.386665 | orchestrator | 2025-01-16 15:14:12 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:14:15.420483 | orchestrator | 2025-01-16 15:14:12 | INFO  | Task 
de3dd6ef-d077-4cd3-93db-fc2b15a456b4 is in state STARTED 2025-01-16 15:14:15.420658 | orchestrator | 2025-01-16 15:14:12 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:14:15.420688 | orchestrator | 2025-01-16 15:14:12 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state STARTED 2025-01-16 15:14:15.420714 | orchestrator | 2025-01-16 15:14:12 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:14:15.420879 | orchestrator | 2025-01-16 15:14:15 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:14:15.420909 | orchestrator | 2025-01-16 15:14:15 | INFO  | Task de3dd6ef-d077-4cd3-93db-fc2b15a456b4 is in state STARTED 2025-01-16 15:14:15.421446 | orchestrator | 2025-01-16 15:14:15 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:14:15.421756 | orchestrator | 2025-01-16 15:14:15 | INFO  | Task 5fcfc9d1-b0cf-44ee-954a-4cbb06185e48 is in state SUCCESS 2025-01-16 15:14:15.421890 | orchestrator | 2025-01-16 15:14:15 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:14:15.423677 | orchestrator | 2025-01-16 15:14:15.423919 | orchestrator | 2025-01-16 15:14:15.423936 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-01-16 15:14:15.424345 | orchestrator | 2025-01-16 15:14:15.424404 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-01-16 15:14:15.424429 | orchestrator | Thursday 16 January 2025 15:12:54 +0000 (0:00:00.153) 0:00:00.153 ****** 2025-01-16 15:14:15.424462 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:14:15.424474 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:14:15.424732 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:14:15.424950 | orchestrator | 2025-01-16 15:14:15.424968 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-01-16 15:14:15.424977 | orchestrator | Thursday 16 January 2025 15:12:54 +0000 (0:00:00.180) 0:00:00.334 ****** 2025-01-16 15:14:15.424987 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-01-16 15:14:15.424997 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-01-16 15:14:15.425007 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-01-16 15:14:15.425016 | orchestrator | 2025-01-16 15:14:15.425026 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-01-16 15:14:15.425058 | orchestrator | 2025-01-16 15:14:15.425069 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-01-16 15:14:15.425078 | orchestrator | Thursday 16 January 2025 15:12:54 +0000 (0:00:00.394) 0:00:00.728 ****** 2025-01-16 15:14:15.425088 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:14:15.425097 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:14:15.425107 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:14:15.425117 | orchestrator | 2025-01-16 15:14:15.425126 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 15:14:15.425137 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 15:14:15.425148 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 15:14:15.425165 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 
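The long run of "Task … is in state STARTED" / "Wait 1 second(s) until the next check" messages condensed above is the OSISM CLI on the orchestrator polling the manager for the state of asynchronous deploy tasks until each one reports SUCCESS. A minimal, self-contained sketch of that polling pattern, with a stubbed get_task_state() standing in for the real task API (which is not visible in this log):

```python
import time

# Stub of the task-state lookup; the real client queries the OSISM manager's
# task API. The canned states below only make this sketch self-contained.
_FAKE_STATES = {
    "ff3f485f": iter(["STARTED", "STARTED", "SUCCESS"]),
    "5fcfc9d1": iter(["STARTED", "SUCCESS"]),
}

def get_task_state(task_id: str) -> str:
    """Return the current state of a task (stubbed for this example)."""
    return next(_FAKE_STATES[task_id], "SUCCESS")

def wait_for_tasks(task_ids, interval=1):
    """Poll every pending task until it leaves the STARTED state."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)

if __name__ == "__main__":
    wait_for_tasks(["ff3f485f", "5fcfc9d1"])
```

In the job output the same loop tracks five task IDs at once; the SUCCESS lines at 15:12:56 and 15:14:15 mark individual tasks finishing while the remaining ones keep being polled.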
2025-01-16 15:14:15.425204 | orchestrator | TASKS RECAP ********************************************************************
2025-01-16 15:14:15.425214 | orchestrator | Thursday 16 January 2025 15:12:55 +0000 (0:00:00.562) 0:00:01.291 ******
2025-01-16 15:14:15.425224 | orchestrator | ===============================================================================
2025-01-16 15:14:15.425233 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.56s
2025-01-16 15:14:15.425243 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.39s
2025-01-16 15:14:15.425252 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.18s
2025-01-16 15:14:15.425282 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2025-01-16 15:14:15.425301 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2025-01-16 15:14:15.425311 | orchestrator | Thursday 16 January 2025 15:10:10 +0000 (0:00:00.108) 0:00:00.109 ******
2025-01-16 15:14:15.425321 | orchestrator | changed: [localhost]
2025-01-16 15:14:15.425340 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2025-01-16 15:14:15.425349 | orchestrator | Thursday 16 January 2025 15:10:11 +0000 (0:00:00.497) 0:00:00.606 ******
2025-01-16 15:14:15.425359 | orchestrator | changed: [localhost]
2025-01-16 15:14:15.425378 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2025-01-16 15:14:15.425388 | orchestrator | Thursday 16 January 2025 15:11:13 +0000 (0:01:02.635) 0:01:03.241 ******
2025-01-16 15:14:15.425397 | orchestrator | changed: [localhost]
2025-01-16 15:14:15.425417 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-01-16 15:14:15.425437 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-01-16 15:14:15.425447 | orchestrator | Thursday 16 January 2025 15:11:19 +0000 (0:00:05.657) 0:01:08.898 ******
2025-01-16 15:14:15.425457 | orchestrator | ok: [testbed-node-0]
2025-01-16 15:14:15.425467 | orchestrator | ok: [testbed-node-1]
2025-01-16 15:14:15.425477 | orchestrator | ok: [testbed-node-2]
2025-01-16 15:14:15.425540 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-01-16 15:14:15.425550 | orchestrator | Thursday 16 January 2025 15:11:21 +0000 (0:00:02.058) 0:01:10.957 ******
2025-01-16 15:14:15.425560 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_True)
2025-01-16 15:14:15.425570 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_True)
2025-01-16 15:14:15.425580 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_True)
2025-01-16 15:14:15.425601 | orchestrator | PLAY [Apply role ironic] *******************************************************
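The "Download ironic ipa images" play above fetches the ironic-agent kernel and initramfs onto the deploy host before the ironic role tasks that follow are applied. A rough Python equivalent of those two download tasks; the URLs and destination directory below are placeholders, since the playbook's actual variables are not visible in this log:

```python
import pathlib
import urllib.request

# Placeholder values; the real image URLs and destination come from playbook
# variables that this log does not show.
IPA_IMAGES = {
    "ironic-agent.kernel": "https://example.org/ipa/ironic-python-agent.kernel",
    "ironic-agent.initramfs": "https://example.org/ipa/ironic-python-agent.initramfs",
}
DEST_DIR = pathlib.Path("/tmp/ironic-agent-images")

def download_ipa_images():
    """Ensure the destination directory exists, then fetch each image."""
    DEST_DIR.mkdir(parents=True, exist_ok=True)    # "Ensure the destination directory exists"
    for name, url in IPA_IMAGES.items():
        target = DEST_DIR / name
        print(f"Downloading {url} -> {target}")
        urllib.request.urlretrieve(url, target)    # "Download ironic-agent initramfs/kernel"

if __name__ == "__main__":
    download_ipa_images()
```

In the job itself the play ran against localhost on the orchestrator, and the initramfs download dominated the runtime at roughly one minute, as the task timings above show.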
15:14:15.425629 | orchestrator | TASK [ironic : include_tasks] ************************************************** 2025-01-16 15:14:15.425640 | orchestrator | Thursday 16 January 2025 15:11:23 +0000 (0:00:01.882) 0:01:12.839 ****** 2025-01-16 15:14:15.425650 | orchestrator | included: /ansible/roles/ironic/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:14:15.425661 | orchestrator | 2025-01-16 15:14:15.425672 | orchestrator | TASK [service-ks-register : ironic | Creating services] ************************ 2025-01-16 15:14:15.425687 | orchestrator | Thursday 16 January 2025 15:11:24 +0000 (0:00:00.972) 0:01:13.812 ****** 2025-01-16 15:14:15.425698 | orchestrator | changed: [testbed-node-0] => (item=ironic (baremetal)) 2025-01-16 15:14:15.425709 | orchestrator | changed: [testbed-node-0] => (item=ironic-inspector (baremetal-introspection)) 2025-01-16 15:14:15.425721 | orchestrator | 2025-01-16 15:14:15.425771 | orchestrator | TASK [service-ks-register : ironic | Creating endpoints] *********************** 2025-01-16 15:14:15.425789 | orchestrator | Thursday 16 January 2025 15:11:29 +0000 (0:00:04.635) 0:01:18.448 ****** 2025-01-16 15:14:15.425801 | orchestrator | changed: [testbed-node-0] => (item=ironic -> https://api-int.testbed.osism.xyz:6385 -> internal) 2025-01-16 15:14:15.425812 | orchestrator | changed: [testbed-node-0] => (item=ironic -> https://api.testbed.osism.xyz:6385 -> public) 2025-01-16 15:14:15.425823 | orchestrator | changed: [testbed-node-0] => (item=ironic-inspector -> https://api-int.testbed.osism.xyz:5050 -> internal) 2025-01-16 15:14:15.425834 | orchestrator | changed: [testbed-node-0] => (item=ironic-inspector -> https://api.testbed.osism.xyz:5050 -> public) 2025-01-16 15:14:15.425844 | orchestrator | 2025-01-16 15:14:15.425855 | orchestrator | TASK [service-ks-register : ironic | Creating projects] ************************ 2025-01-16 15:14:15.425866 | orchestrator | Thursday 16 January 2025 15:11:37 +0000 (0:00:08.918) 0:01:27.367 ****** 2025-01-16 15:14:15.425876 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-01-16 15:14:15.425887 | orchestrator | 2025-01-16 15:14:15.425898 | orchestrator | TASK [service-ks-register : ironic | Creating users] *************************** 2025-01-16 15:14:15.425908 | orchestrator | Thursday 16 January 2025 15:11:40 +0000 (0:00:02.309) 0:01:29.676 ****** 2025-01-16 15:14:15.425919 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-01-16 15:14:15.425929 | orchestrator | changed: [testbed-node-0] => (item=ironic -> service) 2025-01-16 15:14:15.425943 | orchestrator | changed: [testbed-node-0] => (item=ironic-inspector -> service) 2025-01-16 15:14:15.425954 | orchestrator | 2025-01-16 15:14:15.425965 | orchestrator | TASK [service-ks-register : ironic | Creating roles] *************************** 2025-01-16 15:14:15.425976 | orchestrator | Thursday 16 January 2025 15:11:45 +0000 (0:00:05.354) 0:01:35.030 ****** 2025-01-16 15:14:15.425986 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-01-16 15:14:15.425996 | orchestrator | 2025-01-16 15:14:15.426005 | orchestrator | TASK [service-ks-register : ironic | Granting user roles] ********************** 2025-01-16 15:14:15.426062 | orchestrator | Thursday 16 January 2025 15:11:48 +0000 (0:00:02.605) 0:01:37.636 ****** 2025-01-16 15:14:15.426085 | orchestrator | changed: [testbed-node-0] => (item=ironic -> service -> admin) 2025-01-16 15:14:15.426095 | orchestrator | changed: [testbed-node-0] => 
(item=ironic-inspector -> service -> admin) 2025-01-16 15:14:15.426105 | orchestrator | changed: [testbed-node-0] => (item=ironic -> service -> service) 2025-01-16 15:14:15.426115 | orchestrator | changed: [testbed-node-0] => (item=ironic-inspector -> service -> service) 2025-01-16 15:14:15.426124 | orchestrator | 2025-01-16 15:14:15.426134 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-01-16 15:14:15.426144 | orchestrator | Thursday 16 January 2025 15:11:58 +0000 (0:00:10.712) 0:01:48.349 ****** 2025-01-16 15:14:15.426153 | orchestrator | changed: [testbed-node-1] => (item=iscsi_tcp) 2025-01-16 15:14:15.426163 | orchestrator | changed: [testbed-node-0] => (item=iscsi_tcp) 2025-01-16 15:14:15.426173 | orchestrator | changed: [testbed-node-2] => (item=iscsi_tcp) 2025-01-16 15:14:15.426189 | orchestrator | 2025-01-16 15:14:15.426199 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-01-16 15:14:15.426208 | orchestrator | Thursday 16 January 2025 15:11:59 +0000 (0:00:00.680) 0:01:49.029 ****** 2025-01-16 15:14:15.426217 | orchestrator | changed: [testbed-node-1] => (item=iscsi_tcp) 2025-01-16 15:14:15.426227 | orchestrator | changed: [testbed-node-0] => (item=iscsi_tcp) 2025-01-16 15:14:15.426236 | orchestrator | changed: [testbed-node-2] => (item=iscsi_tcp) 2025-01-16 15:14:15.426246 | orchestrator | 2025-01-16 15:14:15.426255 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-01-16 15:14:15.426265 | orchestrator | Thursday 16 January 2025 15:12:01 +0000 (0:00:01.829) 0:01:50.858 ****** 2025-01-16 15:14:15.426274 | orchestrator | skipping: [testbed-node-0] => (item=iscsi_tcp)  2025-01-16 15:14:15.426284 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:14:15.426294 | orchestrator | skipping: [testbed-node-1] => (item=iscsi_tcp)  2025-01-16 15:14:15.426303 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:14:15.426313 | orchestrator | skipping: [testbed-node-2] => (item=iscsi_tcp)  2025-01-16 15:14:15.426323 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:14:15.426332 | orchestrator | 2025-01-16 15:14:15.426342 | orchestrator | TASK [ironic : Ensuring config directories exist] ****************************** 2025-01-16 15:14:15.426352 | orchestrator | Thursday 16 January 2025 15:12:02 +0000 (0:00:01.286) 0:01:52.144 ****** 2025-01-16 15:14:15.426363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-01-16 15:14:15.426409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 
'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:15.426422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-01-16 15:14:15.426433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-01-16 15:14:15.426477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-01-16 15:14:15.426566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-conductor', 'value': {'container_name': 
'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:15.426581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:15.426593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}}) 2025-01-16 15:14:15.426604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-01-16 15:14:15.426621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-01-16 15:14:15.426642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}}) 2025-01-16 15:14:15.426674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-01-16 15:14:15.426685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-01-16 15:14:15.426695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}}) 2025-01-16 15:14:15.426716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}}) 2025-01-16 15:14:15.426726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 
'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}}) 2025-01-16 15:14:15.426735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-01-16 15:14:15.426745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-01-16 15:14:15.426754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}}) 2025-01-16 15:14:15.426789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-01-16 15:14:15.426802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-01-16 15:14:15.426816 | orchestrator | 2025-01-16 15:14:15.426825 | orchestrator | TASK [ironic : Check if Ironic policies shall be overwritten] ****************** 2025-01-16 15:14:15.426834 | orchestrator | Thursday 16 January 2025 15:12:05 +0000 (0:00:02.816) 0:01:54.960 ****** 2025-01-16 15:14:15.426843 | orchestrator | 
skipping: [testbed-node-0] 2025-01-16 15:14:15.426852 | orchestrator | 2025-01-16 15:14:15.426861 | orchestrator | TASK [ironic : Check if Ironic Inspector policies shall be overwritten] ******** 2025-01-16 15:14:15.426870 | orchestrator | Thursday 16 January 2025 15:12:05 +0000 (0:00:00.141) 0:01:55.102 ****** 2025-01-16 15:14:15.426879 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:14:15.426887 | orchestrator | 2025-01-16 15:14:15.426896 | orchestrator | TASK [ironic : Set ironic policy file] ***************************************** 2025-01-16 15:14:15.426905 | orchestrator | Thursday 16 January 2025 15:12:05 +0000 (0:00:00.154) 0:01:55.257 ****** 2025-01-16 15:14:15.426913 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:14:15.426922 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:14:15.426931 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:14:15.426939 | orchestrator | 2025-01-16 15:14:15.426948 | orchestrator | TASK [ironic : Set ironic-inspector policy file] ******************************* 2025-01-16 15:14:15.426957 | orchestrator | Thursday 16 January 2025 15:12:06 +0000 (0:00:00.640) 0:01:55.897 ****** 2025-01-16 15:14:15.426965 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:14:15.426974 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:14:15.426982 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:14:15.426991 | orchestrator | 2025-01-16 15:14:15.427000 | orchestrator | TASK [ironic : include_tasks] ************************************************** 2025-01-16 15:14:15.427008 | orchestrator | Thursday 16 January 2025 15:12:06 +0000 (0:00:00.436) 0:01:56.333 ****** 2025-01-16 15:14:15.427017 | orchestrator | included: /ansible/roles/ironic/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:14:15.427026 | orchestrator | 2025-01-16 15:14:15.427034 | orchestrator | TASK [service-cert-copy : ironic | Copying over extra CA certificates] ********* 2025-01-16 15:14:15.427043 | orchestrator | Thursday 16 January 2025 15:12:07 +0000 (0:00:00.430) 0:01:56.764 ****** 2025-01-16 15:14:15.427052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-01-16 15:14:15.427080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-01-16 15:14:15.427091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-01-16 15:14:15.427112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:15.427123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:15.427132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ironic-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:15.427159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-01-16 15:14:15.427181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-01-16 15:14:15.427191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-01-16 15:14:15.427200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}}) 2025-01-16 15:14:15.427209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}}) 2025-01-16 15:14:15.427218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}}) 2025-01-16 15:14:15.427244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}}) 2025-01-16 15:14:15.427260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}}) 2025-01-16 15:14:15.427276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}}) 2025-01-16 15:14:15.427286 | orchestrator | 2025-01-16 15:14:15.427295 | orchestrator | TASK [service-cert-copy : ironic | Copying over backend internal TLS certificate] *** 2025-01-16 15:14:15.427307 | orchestrator | Thursday 16 January 2025 15:12:11 +0000 (0:00:04.041) 0:02:00.805 ****** 2025-01-16 15:14:15.427317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 
'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-01-16 15:14:15.427326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:14:15.427335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-01-16 15:14:15.427375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-01-16 15:14:15.427386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 
'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-01-16 15:14:15.427396 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:14:15.427405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-01-16 15:14:15.427414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:14:15.427423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-01-16 15:14:15.427463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-01-16 
15:14:15.427475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-01-16 15:14:15.427500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-01-16 15:14:15.427510 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:14:15.427519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:14:15.427529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-01-16 15:14:15.427547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 
'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-01-16 15:14:15.427585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-01-16 15:14:15.427595 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:14:15.427609 | orchestrator | 2025-01-16 15:14:15.427624 | orchestrator | TASK [service-cert-copy : ironic | Copying over backend internal TLS key] ****** 2025-01-16 15:14:15.427639 | orchestrator | Thursday 16 January 2025 15:12:13 +0000 (0:00:01.860) 0:02:02.665 ****** 2025-01-16 15:14:15.427655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-01-16 15:14:15.427670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:14:15.427682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-01-16 15:14:15.427707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-01-16 15:14:15.427739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-01-16 15:14:15.427750 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:14:15.427759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-01-16 15:14:15.427769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:14:15.427779 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-01-16 15:14:15.427797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-01-16 15:14:15.427812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-01-16 15:14:15.427821 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:14:15.427847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-01-16 15:14:15.427858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:14:15.427867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-01-16 15:14:15.427885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-01-16 15:14:15.427899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-01-16 15:14:15.427908 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:14:15.427917 | orchestrator | 2025-01-16 15:14:15.427926 | orchestrator | TASK [ironic : Copying over config.json files for services] ******************** 2025-01-16 15:14:15.427935 | orchestrator | Thursday 16 January 2025 15:12:14 +0000 (0:00:01.343) 0:02:04.009 ****** 2025-01-16 15:14:15.427963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-01-16 15:14:15.427974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-01-16 15:14:15.427984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-01-16 15:14:15.427993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:15.428014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:15.428046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:15.428057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-01-16 15:14:15.428067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-01-16 15:14:15.428089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': 
'30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-01-16 15:14:15.428099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}}) 2025-01-16 15:14:15.428129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}}) 2025-01-16 15:14:15.428140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}}) 2025-01-16 15:14:15.428149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}}) 2025-01-16 15:14:15.428158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-01-16 15:14:15.428168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': 
['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-01-16 15:14:15.428181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}}) 2025-01-16 15:14:15.428198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-01-16 15:14:15.428226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-01-16 15:14:15.428236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}}) 2025-01-16 15:14:15.428245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-01-16 15:14:15.428255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-01-16 15:14:15.428264 | orchestrator | 2025-01-16 15:14:15.428273 | orchestrator | TASK [ironic : Copying over ironic.conf] *************************************** 2025-01-16 15:14:15.428282 | orchestrator | Thursday 16 January 2025 15:12:19 +0000 (0:00:05.101) 0:02:09.111 ****** 2025-01-16 15:14:15.428296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-01-16 15:14:15.428316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-01-16 15:14:15.428344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-01-16 15:14:15.428355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': 
['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:15.428364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-01-16 15:14:15.428387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-01-16 15:14:15.428396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-01-16 15:14:15.428406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-01-16 15:14:15.428419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': 
['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-01-16 15:14:15.428429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:15.428446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-01-16 15:14:15.428461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-01-16 15:14:15.428470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-01-16 15:14:15.428479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': 
['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-01-16 15:14:15.428557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-01-16 15:14:15.428574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:15.428594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-01-16 15:14:15.428610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-01-16 15:14:15.428619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-01-16 15:14:15.428628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-01-16 15:14:15.428638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-01-16 15:14:15.428647 | orchestrator | 2025-01-16 15:14:15.428656 | orchestrator | TASK [ironic : Copying over inspector.conf] ************************************ 2025-01-16 15:14:15.428665 | orchestrator | Thursday 16 January 2025 15:12:26 +0000 (0:00:06.765) 0:02:15.876 ****** 2025-01-16 15:14:15.428677 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:14:15.428687 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:14:15.428696 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:14:15.428705 | orchestrator | 2025-01-16 15:14:15.428713 | orchestrator | TASK [ironic : Copying over dnsmasq.conf] ************************************** 2025-01-16 15:14:15.428722 | orchestrator | Thursday 16 January 2025 15:12:32 +0000 (0:00:05.905) 0:02:21.782 ****** 2025-01-16 15:14:15.428731 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/ironic/templates/ironic-dnsmasq.conf.j2)  2025-01-16 15:14:15.428740 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:14:15.428749 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/ironic/templates/ironic-dnsmasq.conf.j2)  2025-01-16 15:14:15.428758 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:14:15.428766 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/ironic/templates/ironic-dnsmasq.conf.j2)  2025-01-16 15:14:15.428775 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:14:15.428784 | orchestrator | 2025-01-16 15:14:15.428793 | orchestrator | TASK [ironic : Copying pxelinux.cfg default] *********************************** 2025-01-16 15:14:15.428801 | orchestrator | Thursday 16 January 2025 15:12:34 +0000 (0:00:02.438) 0:02:24.221 ****** 2025-01-16 15:14:15.428817 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/ironic/templates/pxelinux.default.j2)  2025-01-16 15:14:15.428826 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:14:15.428835 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/ironic/templates/pxelinux.default.j2)  2025-01-16 15:14:15.428844 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:14:15.428853 | orchestrator | skipping: [testbed-node-2] => 
(item=/ansible/roles/ironic/templates/pxelinux.default.j2)  2025-01-16 15:14:15.428861 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:14:15.428870 | orchestrator | 2025-01-16 15:14:15.428879 | orchestrator | TASK [ironic : Copying ironic-agent kernel and initramfs (PXE)] **************** 2025-01-16 15:14:15.428888 | orchestrator | Thursday 16 January 2025 15:12:36 +0000 (0:00:01.554) 0:02:25.775 ****** 2025-01-16 15:14:15.428897 | orchestrator | skipping: [testbed-node-0] => (item=ironic-agent.kernel)  2025-01-16 15:14:15.428906 | orchestrator | skipping: [testbed-node-2] => (item=ironic-agent.kernel)  2025-01-16 15:14:15.428915 | orchestrator | skipping: [testbed-node-1] => (item=ironic-agent.kernel)  2025-01-16 15:14:15.428924 | orchestrator | skipping: [testbed-node-2] => (item=ironic-agent.initramfs)  2025-01-16 15:14:15.428932 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:14:15.428940 | orchestrator | skipping: [testbed-node-0] => (item=ironic-agent.initramfs)  2025-01-16 15:14:15.428948 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:14:15.428957 | orchestrator | skipping: [testbed-node-1] => (item=ironic-agent.initramfs)  2025-01-16 15:14:15.428965 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:14:15.428973 | orchestrator | 2025-01-16 15:14:15.428981 | orchestrator | TASK [ironic : Copying ironic-agent kernel and initramfs (iPXE)] *************** 2025-01-16 15:14:15.428989 | orchestrator | Thursday 16 January 2025 15:12:39 +0000 (0:00:03.006) 0:02:28.782 ****** 2025-01-16 15:14:15.428997 | orchestrator | changed: [testbed-node-2] => (item=ironic-agent.kernel) 2025-01-16 15:14:15.429005 | orchestrator | changed: [testbed-node-1] => (item=ironic-agent.kernel) 2025-01-16 15:14:15.429013 | orchestrator | changed: [testbed-node-0] => (item=ironic-agent.kernel) 2025-01-16 15:14:15.429021 | orchestrator | changed: [testbed-node-1] => (item=ironic-agent.initramfs) 2025-01-16 15:14:15.429029 | orchestrator | changed: [testbed-node-2] => (item=ironic-agent.initramfs) 2025-01-16 15:14:15.429037 | orchestrator | changed: [testbed-node-0] => (item=ironic-agent.initramfs) 2025-01-16 15:14:15.429044 | orchestrator | 2025-01-16 15:14:15.429053 | orchestrator | TASK [ironic : Copying inspector.ipxe] ***************************************** 2025-01-16 15:14:15.429060 | orchestrator | Thursday 16 January 2025 15:12:46 +0000 (0:00:07.099) 0:02:35.881 ****** 2025-01-16 15:14:15.429068 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/ironic/templates/inspector.ipxe.j2) 2025-01-16 15:14:15.429077 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/ironic/templates/inspector.ipxe.j2) 2025-01-16 15:14:15.429084 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/ironic/templates/inspector.ipxe.j2) 2025-01-16 15:14:15.429092 | orchestrator | 2025-01-16 15:14:15.429100 | orchestrator | TASK [ironic : Copying ironic-http-httpd.conf] ********************************* 2025-01-16 15:14:15.429108 | orchestrator | Thursday 16 January 2025 15:12:48 +0000 (0:00:01.614) 0:02:37.496 ****** 2025-01-16 15:14:15.429117 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/ironic/templates/ironic-http-httpd.conf.j2) 2025-01-16 15:14:15.429125 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/ironic/templates/ironic-http-httpd.conf.j2) 2025-01-16 15:14:15.429133 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/ironic/templates/ironic-http-httpd.conf.j2) 2025-01-16 15:14:15.429141 | 
orchestrator | 2025-01-16 15:14:15.429149 | orchestrator | TASK [ironic : Copying over ironic-prometheus-exporter-wsgi.conf] ************** 2025-01-16 15:14:15.429157 | orchestrator | Thursday 16 January 2025 15:12:49 +0000 (0:00:01.694) 0:02:39.190 ****** 2025-01-16 15:14:15.429165 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/ironic/templates/ironic-prometheus-exporter-wsgi.conf.j2)  2025-01-16 15:14:15.429177 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:14:15.429185 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/ironic/templates/ironic-prometheus-exporter-wsgi.conf.j2)  2025-01-16 15:14:15.429193 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:14:15.429201 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/ironic/templates/ironic-prometheus-exporter-wsgi.conf.j2)  2025-01-16 15:14:15.429213 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:14:15.429221 | orchestrator | 2025-01-16 15:14:15.429229 | orchestrator | TASK [ironic : Copying over existing Ironic policy file] *********************** 2025-01-16 15:14:15.429237 | orchestrator | Thursday 16 January 2025 15:12:50 +0000 (0:00:00.834) 0:02:40.024 ****** 2025-01-16 15:14:15.429246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-01-16 15:14:15.429261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:14:15.429271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-01-16 15:14:15.429280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-01-16 15:14:15.429292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-01-16 15:14:15.429306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-01-16 15:14:15.429315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-01-16 15:14:15.429323 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:14:15.429338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-01-16 15:14:15.429347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:14:15.429356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-01-16 15:14:15.429369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-01-16 15:14:15.429382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-01-16 15:14:15.429391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-01-16 15:14:15.429407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-01-16 15:14:15.429416 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:14:15.429424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-01-16 15:14:15.429433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:14:15.429451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-01-16 15:14:15.429470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-tftp', 'value': {'container_name': 
'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-01-16 15:14:15.429478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-01-16 15:14:15.429503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-01-16 15:14:15.429513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-01-16 15:14:15.429521 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:14:15.429533 | orchestrator | 2025-01-16 15:14:15.429541 | orchestrator | TASK [ironic : Copying over existing Ironic Inspector policy file] ************* 2025-01-16 15:14:15.429549 | orchestrator | Thursday 16 January 2025 15:12:51 +0000 (0:00:00.540) 0:02:40.565 ****** 2025-01-16 15:14:15.429558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-01-16 15:14:15.429575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:14:15.429591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-01-16 15:14:15.429600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-01-16 15:14:15.429609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-01-16 15:14:15.429617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-01-16 15:14:15.429630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-prometheus-exporter', 'value': 
{'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-01-16 15:14:15.429639 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:14:15.429654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-01-16 15:14:15.429667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:14:15.429676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-01-16 15:14:15.429684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': 
'/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-01-16 15:14:15.429698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-01-16 15:14:15.429706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-01-16 15:14:15.429714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-01-16 15:14:15.429723 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:14:15.429741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-01-16 15:14:15.429750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:14:15.429758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-01-16 15:14:15.429771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-01-16 15:14:15.429780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-01-16 15:14:15.429788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-01-16 15:14:15.429806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-01-16 15:14:15.429815 | orchestrator | 
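Every item of the two policy-file tasks above is skipped because this deployment ships no custom oslo.policy files for Ironic or Ironic Inspector; the tasks only copy a file when the operator provides one. For context, a hedged sketch of what such an override would typically look like in a kolla-ansible style setup (the path and rule strings follow the usual conventions and are not taken from this job):

    # /etc/kolla/config/ironic/policy.yaml -- operator-supplied override (illustrative)
    # Each key is an Ironic policy target, each value an oslo.policy rule string.
    "baremetal:node:get": "role:admin or role:reader"
    "baremetal:node:create": "role:admin"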
skipping: [testbed-node-2] 2025-01-16 15:14:15.429823 | orchestrator | 2025-01-16 15:14:15.429831 | orchestrator | TASK [ironic : Copying over ironic-api-wsgi.conf] ****************************** 2025-01-16 15:14:15.429839 | orchestrator | Thursday 16 January 2025 15:12:51 +0000 (0:00:00.561) 0:02:41.126 ****** 2025-01-16 15:14:15.429847 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:14:15.429855 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:14:15.429863 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:14:15.429871 | orchestrator | 2025-01-16 15:14:15.429879 | orchestrator | TASK [ironic : Check ironic containers] **************************************** 2025-01-16 15:14:15.429887 | orchestrator | Thursday 16 January 2025 15:12:52 +0000 (0:00:01.281) 0:02:42.407 ****** 2025-01-16 15:14:15.429895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-01-16 15:14:15.429909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-01-16 15:14:15.429917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 
'tls_backend': 'no'}}}}) 2025-01-16 15:14:15.429939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:15.429948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:15.429957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:15.429969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-01-16 15:14:15.429978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-01-16 15:14:15.429997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-01-16 15:14:15.430006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}}) 2025-01-16 15:14:15.430056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}}) 2025-01-16 15:14:15.430071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}}) 2025-01-16 
15:14:15.430080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}}) 2025-01-16 15:14:15.430088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-01-16 15:14:15.430097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-01-16 15:14:15.430121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}}) 2025-01-16 15:14:15.430130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-01-16 15:14:15.430139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-01-16 15:14:15.430152 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}}) 2025-01-16 15:14:15.430160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-01-16 15:14:15.430169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic/metrics'], 'dimensions': {}}})  2025-01-16 15:14:15.430177 | orchestrator | 2025-01-16 15:14:15.430185 | orchestrator | TASK [ironic : include_tasks] ************************************************** 2025-01-16 15:14:15.430194 | orchestrator | Thursday 16 January 2025 15:12:56 +0000 (0:00:03.133) 0:02:45.541 ****** 2025-01-16 15:14:15.430202 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:14:15.430210 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:14:15.430219 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:14:15.430227 | orchestrator | 2025-01-16 15:14:15.430235 | orchestrator | TASK [ironic : Creating Ironic database] *************************************** 2025-01-16 15:14:15.430243 | orchestrator | Thursday 16 January 2025 15:12:56 +0000 (0:00:00.208) 0:02:45.750 ****** 2025-01-16 15:14:15.430251 | orchestrator | changed: [testbed-node-0] => (item={'database_name': 'ironic', 'group': 'ironic-api'}) 2025-01-16 15:14:15.430260 | orchestrator | changed: [testbed-node-0] => (item={'database_name': 'ironic_inspector', 'group': 'ironic-inspector'}) 2025-01-16 15:14:15.430268 | orchestrator | 2025-01-16 15:14:15.430276 | orchestrator | TASK [ironic : Creating Ironic database user and setting permissions] ********** 2025-01-16 15:14:15.430284 | orchestrator | Thursday 16 January 2025 15:12:59 +0000 (0:00:03.154) 0:02:48.904 ****** 2025-01-16 15:14:15.430293 | orchestrator | changed: [testbed-node-0] => (item=ironic) 2025-01-16 15:14:15.430300 | orchestrator | changed: [testbed-node-0] => (item=ironic_inspector) 2025-01-16 15:14:15.430309 | orchestrator | 2025-01-16 15:14:15.430317 | orchestrator | TASK [ironic : Running Ironic bootstrap container] ***************************** 2025-01-16 15:14:15.430325 | orchestrator | Thursday 16 January 2025 15:13:02 +0000 (0:00:03.129) 0:02:52.034 ****** 2025-01-16 15:14:15.430336 | orchestrator | changed: [testbed-node-0] 2025-01-16 
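The two database tasks above create the ironic and ironic_inspector schemas plus a shared database user before the bootstrap container applies the schema migrations. A self-contained sketch of the same idea using the community.mysql modules; kolla-ansible actually drives this through its kolla_toolbox container, so the module choice and variable names here are illustrative:

    - name: Create Ironic databases
      community.mysql.mysql_db:
        login_host: "{{ database_address }}"          # e.g. the internal VIP
        login_user: root
        login_password: "{{ database_password }}"
        name: "{{ item }}"
        state: present
      loop:
        - ironic
        - ironic_inspector

    - name: Create Ironic database user with privileges on both schemas
      community.mysql.mysql_user:
        login_host: "{{ database_address }}"
        login_user: root
        login_password: "{{ database_password }}"
        name: ironic
        password: "{{ ironic_database_password }}"
        priv: "ironic.*:ALL/ironic_inspector.*:ALL"
        host: "%"
        state: present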
15:14:15.430344 | orchestrator | 2025-01-16 15:14:15.430353 | orchestrator | TASK [ironic : Running Ironic Inspector bootstrap container] ******************* 2025-01-16 15:14:15.430362 | orchestrator | Thursday 16 January 2025 15:13:14 +0000 (0:00:11.693) 0:03:03.728 ****** 2025-01-16 15:14:15.430370 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:14:15.430378 | orchestrator | 2025-01-16 15:14:15.430387 | orchestrator | TASK [ironic : Running ironic-tftp bootstrap container] ************************ 2025-01-16 15:14:15.430401 | orchestrator | Thursday 16 January 2025 15:13:20 +0000 (0:00:06.698) 0:03:10.426 ****** 2025-01-16 15:14:15.430409 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:14:15.430417 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:14:15.430425 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:14:15.430433 | orchestrator | 2025-01-16 15:14:15.430441 | orchestrator | TASK [ironic : Flush handlers] ************************************************* 2025-01-16 15:14:15.430449 | orchestrator | Thursday 16 January 2025 15:13:31 +0000 (0:00:10.607) 0:03:21.034 ****** 2025-01-16 15:14:15.430457 | orchestrator | 2025-01-16 15:14:15.430465 | orchestrator | TASK [ironic : Flush handlers] ************************************************* 2025-01-16 15:14:15.430474 | orchestrator | Thursday 16 January 2025 15:13:31 +0000 (0:00:00.176) 0:03:21.217 ****** 2025-01-16 15:14:15.430481 | orchestrator | 2025-01-16 15:14:15.430505 | orchestrator | TASK [ironic : Flush handlers] ************************************************* 2025-01-16 15:14:15.430514 | orchestrator | Thursday 16 January 2025 15:13:31 +0000 (0:00:00.118) 0:03:21.335 ****** 2025-01-16 15:14:15.430523 | orchestrator | 2025-01-16 15:14:15.430531 | orchestrator | RUNNING HANDLER [ironic : Restart ironic-conductor container] ****************** 2025-01-16 15:14:15.430540 | orchestrator | Thursday 16 January 2025 15:13:31 +0000 (0:00:00.052) 0:03:21.388 ****** 2025-01-16 15:14:15.430548 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:14:15.430556 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:14:15.430564 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:14:15.430572 | orchestrator | 2025-01-16 15:14:15.430580 | orchestrator | RUNNING HANDLER [ironic : Restart ironic-api container] ************************ 2025-01-16 15:14:15.430588 | orchestrator | Thursday 16 January 2025 15:13:46 +0000 (0:00:14.414) 0:03:35.802 ****** 2025-01-16 15:14:15.430596 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:14:15.430604 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:14:15.430612 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:14:15.430620 | orchestrator | 2025-01-16 15:14:15.430628 | orchestrator | RUNNING HANDLER [ironic : Restart ironic-inspector container] ****************** 2025-01-16 15:14:15.430636 | orchestrator | Thursday 16 January 2025 15:13:57 +0000 (0:00:11.328) 0:03:47.131 ****** 2025-01-16 15:14:15.430644 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:14:15.430652 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:14:15.430660 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:14:15.430668 | orchestrator | 2025-01-16 15:14:15.430676 | orchestrator | RUNNING HANDLER [ironic : Restart ironic-tftp container] *********************** 2025-01-16 15:14:15.430684 | orchestrator | Thursday 16 January 2025 15:14:02 +0000 (0:00:04.386) 0:03:51.517 ****** 2025-01-16 15:14:15.430693 | orchestrator | changed: [testbed-node-1] 2025-01-16 
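The repeated 'Flush handlers' tasks and the RUNNING HANDLER entries that follow show the standard Ansible handler pattern kolla-ansible relies on: configuration tasks notify a per-container restart handler, and meta: flush_handlers forces those handlers to run at a defined point instead of at the end of the play. A generic, self-contained sketch of the pattern (group, file and handler names are illustrative):

    - hosts: ironic-api
      tasks:
        - name: Copying over ironic.conf
          ansible.builtin.copy:
            content: "[DEFAULT]\n"            # placeholder config content
            dest: /etc/kolla/ironic-api/ironic.conf
            mode: "0660"
          notify: Restart ironic-api container

        - name: Flush handlers
          ansible.builtin.meta: flush_handlers
      handlers:
        - name: Restart ironic-api container
          # kolla-ansible recreates the container here via its own modules;
          # a debug task stands in for that action in this sketch.
          ansible.builtin.debug:
            msg: "ironic_api would be restarted now"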
15:14:15.430701 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:14:15.430709 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:14:15.430717 | orchestrator | 2025-01-16 15:14:15.430725 | orchestrator | RUNNING HANDLER [ironic : Restart ironic-http container] *********************** 2025-01-16 15:14:15.430733 | orchestrator | Thursday 16 January 2025 15:14:09 +0000 (0:00:07.743) 0:03:59.261 ****** 2025-01-16 15:14:15.430741 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:14:15.430749 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:14:15.430757 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:14:15.430765 | orchestrator | 2025-01-16 15:14:15.430773 | orchestrator | TASK [ironic : Flush and delete ironic-inspector iptables chain] *************** 2025-01-16 15:14:15.430781 | orchestrator | Thursday 16 January 2025 15:14:13 +0000 (0:00:03.257) 0:04:02.519 ****** 2025-01-16 15:14:15.430789 | orchestrator | ok: [testbed-node-0] => (item=flush) 2025-01-16 15:14:15.430797 | orchestrator | ok: [testbed-node-2] => (item=flush) 2025-01-16 15:14:15.430805 | orchestrator | ok: [testbed-node-1] => (item=flush) 2025-01-16 15:14:15.430813 | orchestrator | ok: [testbed-node-0] => (item=delete-chain) 2025-01-16 15:14:15.430821 | orchestrator | ok: [testbed-node-2] => (item=delete-chain) 2025-01-16 15:14:15.430829 | orchestrator | ok: [testbed-node-1] => (item=delete-chain) 2025-01-16 15:14:15.430837 | orchestrator | 2025-01-16 15:14:15.430850 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 15:14:15.430859 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 15:14:15.430868 | orchestrator | testbed-node-0 : ok=33  changed=26  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-01-16 15:14:15.430880 | orchestrator | testbed-node-1 : ok=23  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-01-16 15:14:15.430888 | orchestrator | testbed-node-2 : ok=23  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-01-16 15:14:15.430897 | orchestrator | 2025-01-16 15:14:15.430905 | orchestrator | 2025-01-16 15:14:15.430913 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 15:14:15.430925 | orchestrator | Thursday 16 January 2025 15:14:14 +0000 (0:00:01.341) 0:04:03.860 ****** 2025-01-16 15:14:15.430933 | orchestrator | =============================================================================== 2025-01-16 15:14:15.430942 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 62.64s 2025-01-16 15:14:15.430950 | orchestrator | ironic : Restart ironic-conductor container ---------------------------- 14.41s 2025-01-16 15:14:15.430958 | orchestrator | ironic : Running Ironic bootstrap container ---------------------------- 11.69s 2025-01-16 15:14:15.430970 | orchestrator | ironic : Restart ironic-api container ---------------------------------- 11.33s 2025-01-16 15:14:18.443070 | orchestrator | service-ks-register : ironic | Granting user roles --------------------- 10.71s 2025-01-16 15:14:18.443214 | orchestrator | ironic : Running ironic-tftp bootstrap container ----------------------- 10.61s 2025-01-16 15:14:18.443242 | orchestrator | service-ks-register : ironic | Creating endpoints ----------------------- 8.92s 2025-01-16 15:14:18.443261 | orchestrator | ironic : Restart ironic-tftp container 
---------------------------------- 7.74s 2025-01-16 15:14:18.443279 | orchestrator | ironic : Copying ironic-agent kernel and initramfs (iPXE) --------------- 7.10s 2025-01-16 15:14:18.443297 | orchestrator | ironic : Copying over ironic.conf --------------------------------------- 6.77s 2025-01-16 15:14:18.443314 | orchestrator | ironic : Running Ironic Inspector bootstrap container ------------------- 6.70s 2025-01-16 15:14:18.443333 | orchestrator | ironic : Copying over inspector.conf ------------------------------------ 5.91s 2025-01-16 15:14:18.443353 | orchestrator | Download ironic-agent kernel -------------------------------------------- 5.66s 2025-01-16 15:14:18.443370 | orchestrator | service-ks-register : ironic | Creating users --------------------------- 5.36s 2025-01-16 15:14:18.443389 | orchestrator | ironic : Copying over config.json files for services -------------------- 5.10s 2025-01-16 15:14:18.443407 | orchestrator | service-ks-register : ironic | Creating services ------------------------ 4.64s 2025-01-16 15:14:18.443426 | orchestrator | ironic : Restart ironic-inspector container ----------------------------- 4.39s 2025-01-16 15:14:18.443444 | orchestrator | service-cert-copy : ironic | Copying over extra CA certificates --------- 4.04s 2025-01-16 15:14:18.443463 | orchestrator | ironic : Restart ironic-http container ---------------------------------- 3.26s 2025-01-16 15:14:18.443482 | orchestrator | ironic : Creating Ironic database --------------------------------------- 3.15s 2025-01-16 15:14:18.443551 | orchestrator | 2025-01-16 15:14:18 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:14:18.445162 | orchestrator | 2025-01-16 15:14:18 | INFO  | Task de3dd6ef-d077-4cd3-93db-fc2b15a456b4 is in state SUCCESS 2025-01-16 15:14:18.445249 | orchestrator | 2025-01-16 15:14:18.445382 | orchestrator | 2025-01-16 15:14:18.445408 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-01-16 15:14:18.445428 | orchestrator | 2025-01-16 15:14:18.445445 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-01-16 15:14:18.445464 | orchestrator | Thursday 16 January 2025 15:12:41 +0000 (0:00:00.209) 0:00:00.209 ****** 2025-01-16 15:14:18.445549 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:14:18.445571 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:14:18.445589 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:14:18.445607 | orchestrator | 2025-01-16 15:14:18.445624 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-01-16 15:14:18.445641 | orchestrator | Thursday 16 January 2025 15:12:41 +0000 (0:00:00.264) 0:00:00.473 ****** 2025-01-16 15:14:18.445658 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-01-16 15:14:18.445675 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-01-16 15:14:18.445692 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-01-16 15:14:18.445709 | orchestrator | 2025-01-16 15:14:18.445727 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-01-16 15:14:18.445745 | orchestrator | 2025-01-16 15:14:18.445763 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-01-16 15:14:18.445782 | orchestrator | Thursday 16 January 2025 15:12:41 +0000 (0:00:00.254) 0:00:00.728 ****** 2025-01-16 15:14:18.445802 | 
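The short 'Group hosts based on configuration' play that precedes each role is plain Ansible group_by: hosts are placed into dynamic groups keyed on the requested Kolla action and on the enable_* flags, which is where items such as enable_magnum_True come from. A minimal sketch of that mechanism (variable names follow the usual kolla-ansible convention and are not verified against this exact playbook):

    - hosts: all
      gather_facts: false
      tasks:
        - name: Group hosts based on Kolla action
          ansible.builtin.group_by:
            key: "kolla_action_{{ kolla_action | default('deploy') }}"

        - name: Group hosts based on enabled services
          ansible.builtin.group_by:
            key: "{{ item }}"
          loop:
            - "enable_magnum_{{ enable_magnum | default(true) | bool }}"   # -> enable_magnum_True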
orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:14:18.445822 | orchestrator | 2025-01-16 15:14:18.445842 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-01-16 15:14:18.445862 | orchestrator | Thursday 16 January 2025 15:12:42 +0000 (0:00:00.496) 0:00:01.224 ****** 2025-01-16 15:14:18.445884 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-01-16 15:14:18.445902 | orchestrator | 2025-01-16 15:14:18.445937 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-01-16 15:14:18.445958 | orchestrator | Thursday 16 January 2025 15:12:44 +0000 (0:00:02.570) 0:00:03.794 ****** 2025-01-16 15:14:18.445977 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-01-16 15:14:18.445998 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-01-16 15:14:18.446081 | orchestrator | 2025-01-16 15:14:18.446103 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-01-16 15:14:18.446122 | orchestrator | Thursday 16 January 2025 15:12:49 +0000 (0:00:04.782) 0:00:08.577 ****** 2025-01-16 15:14:18.446141 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-01-16 15:14:18.446159 | orchestrator | 2025-01-16 15:14:18.446179 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-01-16 15:14:18.446198 | orchestrator | Thursday 16 January 2025 15:12:52 +0000 (0:00:02.443) 0:00:11.021 ****** 2025-01-16 15:14:18.446217 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-01-16 15:14:18.446235 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-01-16 15:14:18.446254 | orchestrator | 2025-01-16 15:14:18.446273 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-01-16 15:14:18.446293 | orchestrator | Thursday 16 January 2025 15:12:54 +0000 (0:00:02.807) 0:00:13.828 ****** 2025-01-16 15:14:18.446314 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-01-16 15:14:18.446335 | orchestrator | 2025-01-16 15:14:18.446356 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-01-16 15:14:18.446377 | orchestrator | Thursday 16 January 2025 15:12:57 +0000 (0:00:02.353) 0:00:16.182 ****** 2025-01-16 15:14:18.446398 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-01-16 15:14:18.446418 | orchestrator | 2025-01-16 15:14:18.446439 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-01-16 15:14:18.446459 | orchestrator | Thursday 16 January 2025 15:13:00 +0000 (0:00:03.040) 0:00:19.222 ****** 2025-01-16 15:14:18.446477 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:14:18.446517 | orchestrator | 2025-01-16 15:14:18.446533 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-01-16 15:14:18.446547 | orchestrator | Thursday 16 January 2025 15:13:02 +0000 (0:00:02.432) 0:00:21.655 ****** 2025-01-16 15:14:18.446577 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:14:18.446593 | orchestrator | 2025-01-16 15:14:18.446609 | orchestrator | TASK [magnum : Creating Magnum trustee user role] 
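The service-ks-register block above performs the usual Keystone bookkeeping for magnum: create the service, its internal and public endpoints, the service project and user, and grant that user the admin role. A rough stand-alone equivalent using the openstack.cloud collection, shown only to make the sequence concrete; kolla-ansible uses its own service-ks-register role, so the modules, the clouds.yaml entry and the password variable below are illustrative (the URLs are the ones printed in the log):

    - hosts: localhost
      gather_facts: false
      tasks:
        - name: Create the magnum service
          openstack.cloud.catalog_service:
            cloud: testbed
            name: magnum
            service_type: container-infra
            state: present

        - name: Create internal and public endpoints
          openstack.cloud.endpoint:
            cloud: testbed
            service: magnum
            endpoint_interface: "{{ item.interface }}"
            url: "{{ item.url }}"
            state: present
          loop:
            - { interface: internal, url: "https://api-int.testbed.osism.xyz:9511/v1" }
            - { interface: public, url: "https://api.testbed.osism.xyz:9511/v1" }

        - name: Create the magnum service user
          openstack.cloud.identity_user:
            cloud: testbed
            name: magnum
            password: "{{ magnum_keystone_password }}"   # illustrative variable
            default_project: service
            state: present

        - name: Grant the admin role to magnum in the service project
          openstack.cloud.role_assignment:
            cloud: testbed
            user: magnum
            role: admin
            project: service
            state: present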
****************************** 2025-01-16 15:14:18.446625 | orchestrator | Thursday 16 January 2025 15:13:05 +0000 (0:00:02.702) 0:00:24.358 ****** 2025-01-16 15:14:18.446639 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:14:18.446654 | orchestrator | 2025-01-16 15:14:18.446669 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-01-16 15:14:18.446686 | orchestrator | Thursday 16 January 2025 15:13:07 +0000 (0:00:02.511) 0:00:26.870 ****** 2025-01-16 15:14:18.446729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-01-16 15:14:18.446792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-01-16 15:14:18.446815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-01-16 15:14:18.446835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:18.446868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:18.446898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:18.446916 | orchestrator | 2025-01-16 15:14:18.446934 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-01-16 15:14:18.446950 | orchestrator | Thursday 16 January 2025 15:13:09 +0000 (0:00:01.293) 0:00:28.163 ****** 2025-01-16 15:14:18.446966 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:14:18.446982 | orchestrator | 2025-01-16 15:14:18.446999 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-01-16 15:14:18.447016 | orchestrator | Thursday 16 January 2025 15:13:09 +0000 (0:00:00.076) 0:00:28.240 ****** 2025-01-16 15:14:18.447033 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:14:18.447050 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:14:18.447068 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:14:18.447085 | orchestrator | 2025-01-16 15:14:18.447104 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-01-16 15:14:18.447331 | orchestrator | Thursday 16 January 2025 15:13:09 +0000 (0:00:00.208) 0:00:28.449 ****** 2025-01-16 15:14:18.447356 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-01-16 15:14:18.447374 | orchestrator | 2025-01-16 15:14:18.447391 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-01-16 15:14:18.447408 | orchestrator | Thursday 16 January 
2025 15:13:10 +0000 (0:00:00.553) 0:00:29.002 ****** 2025-01-16 15:14:18.447451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-01-16 15:14:18.447475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-01-16 15:14:18.447577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-01-16 15:14:18.447616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:18.447638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:18.447676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:18.447697 | orchestrator | 2025-01-16 15:14:18.447726 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-01-16 15:14:18.447747 | orchestrator | Thursday 16 January 2025 15:13:11 +0000 (0:00:01.738) 0:00:30.741 ****** 2025-01-16 15:14:18.447766 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:14:18.447785 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:14:18.447801 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:14:18.447888 | orchestrator | 2025-01-16 15:14:18.447909 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-01-16 15:14:18.447926 | orchestrator | Thursday 16 January 2025 15:13:12 +0000 (0:00:00.475) 0:00:31.216 ****** 2025-01-16 15:14:18.447941 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:14:18.447958 | orchestrator | 2025-01-16 15:14:18.447976 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-01-16 15:14:18.447993 | orchestrator | Thursday 16 January 2025 15:13:12 +0000 (0:00:00.502) 0:00:31.719 ****** 2025-01-16 15:14:18.448009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-01-16 15:14:18.448039 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-01-16 15:14:18.448055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-01-16 15:14:18.448071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:18.448105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:18.448116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:18.448126 | orchestrator | 2025-01-16 15:14:18.448135 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-01-16 15:14:18.448145 | orchestrator | Thursday 16 January 2025 15:13:14 +0000 (0:00:01.747) 0:00:33.467 ****** 2025-01-16 15:14:18.448165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-01-16 15:14:18.448180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:14:18.448190 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:14:18.448209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-01-16 15:14:18.448225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:14:18.448234 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:14:18.448244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-01-16 15:14:18.448261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:14:18.448271 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:14:18.448281 | orchestrator | 2025-01-16 15:14:18.448290 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-01-16 15:14:18.448300 | orchestrator | Thursday 16 January 2025 15:13:15 +0000 (0:00:00.684) 0:00:34.151 ****** 2025-01-16 15:14:18.448309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 
'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-01-16 15:14:18.448336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:14:18.448345 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:14:18.448354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-01-16 15:14:18.448363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:14:18.448372 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:14:18.448388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-01-16 15:14:18.448409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:14:18.448419 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:14:18.448427 | orchestrator | 2025-01-16 15:14:18.448436 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-01-16 15:14:18.448445 | orchestrator | Thursday 16 January 2025 15:13:17 +0000 (0:00:02.347) 0:00:36.498 ****** 2025-01-16 15:14:18.448454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-01-16 15:14:18.448465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-01-16 15:14:18.448538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-01-16 15:14:18.448556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:18.448590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:18.448605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:18.448618 | orchestrator | 2025-01-16 15:14:18.448632 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-01-16 
15:14:18.448648 | orchestrator | Thursday 16 January 2025 15:13:20 +0000 (0:00:02.725) 0:00:39.224 ****** 2025-01-16 15:14:18.448663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-01-16 15:14:18.448688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-01-16 15:14:18.448724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-01-16 15:14:18.448740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:18.448754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:18.448769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:18.448781 | orchestrator | 2025-01-16 15:14:18.448795 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-01-16 15:14:18.448809 | orchestrator | Thursday 16 January 2025 15:13:31 +0000 (0:00:10.775) 0:00:50.000 ****** 2025-01-16 15:14:18.448848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-01-16 15:14:18.448884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:14:18.448898 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:14:18.448913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-01-16 15:14:18.448926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:14:18.448940 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:14:18.448954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-01-16 15:14:18.448987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:14:18.449011 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:14:18.449027 | orchestrator | 2025-01-16 15:14:18.449040 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-01-16 15:14:18.449055 | orchestrator | Thursday 16 January 2025 15:13:32 +0000 (0:00:01.459) 0:00:51.460 ****** 2025-01-16 15:14:18.449069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-01-16 15:14:18.449084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-01-16 15:14:18.449098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-01-16 15:14:18.449121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:18.449153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:18.449169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:18.449182 | orchestrator | 2025-01-16 15:14:18.449196 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-01-16 15:14:18.449209 | orchestrator | Thursday 16 January 2025 15:13:35 +0000 (0:00:03.425) 0:00:54.885 ****** 2025-01-16 15:14:18.449222 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:14:18.449237 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:14:18.449251 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:14:18.449265 | orchestrator | 2025-01-16 15:14:18.449278 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-01-16 15:14:18.449292 | orchestrator | Thursday 16 January 2025 15:13:36 +0000 (0:00:00.618) 0:00:55.503 ****** 2025-01-16 15:14:18.449305 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:14:18.449318 | orchestrator | 2025-01-16 15:14:18.449332 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-01-16 15:14:18.449345 | orchestrator | Thursday 16 January 2025 15:13:38 +0000 (0:00:02.141) 0:00:57.644 ****** 2025-01-16 15:14:18.449360 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:14:18.449373 | orchestrator | 2025-01-16 15:14:18.449386 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-01-16 15:14:18.449400 | orchestrator | Thursday 16 January 2025 
15:13:40 +0000 (0:00:01.728) 0:00:59.373 ****** 2025-01-16 15:14:18.449414 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:14:18.449428 | orchestrator | 2025-01-16 15:14:18.449442 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-01-16 15:14:18.449456 | orchestrator | Thursday 16 January 2025 15:13:50 +0000 (0:00:10.440) 0:01:09.813 ****** 2025-01-16 15:14:18.449469 | orchestrator | 2025-01-16 15:14:18.449507 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-01-16 15:14:18.449524 | orchestrator | Thursday 16 January 2025 15:13:50 +0000 (0:00:00.085) 0:01:09.899 ****** 2025-01-16 15:14:18.449537 | orchestrator | 2025-01-16 15:14:18.449551 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-01-16 15:14:18.449564 | orchestrator | Thursday 16 January 2025 15:13:51 +0000 (0:00:00.242) 0:01:10.141 ****** 2025-01-16 15:14:18.449586 | orchestrator | 2025-01-16 15:14:18.449600 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-01-16 15:14:18.449613 | orchestrator | Thursday 16 January 2025 15:13:51 +0000 (0:00:00.084) 0:01:10.225 ****** 2025-01-16 15:14:18.449628 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:14:18.449643 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:14:18.449658 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:14:18.449672 | orchestrator | 2025-01-16 15:14:18.449685 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-01-16 15:14:18.449698 | orchestrator | Thursday 16 January 2025 15:14:04 +0000 (0:00:12.823) 0:01:23.049 ****** 2025-01-16 15:14:18.449712 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:14:18.449725 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:14:18.449738 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:14:18.449752 | orchestrator | 2025-01-16 15:14:18.449766 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 15:14:18.449780 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-01-16 15:14:18.449796 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-01-16 15:14:18.449809 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-01-16 15:14:18.449823 | orchestrator | 2025-01-16 15:14:18.449836 | orchestrator | 2025-01-16 15:14:18.449861 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 15:14:18.449885 | orchestrator | Thursday 16 January 2025 15:14:15 +0000 (0:00:10.954) 0:01:34.004 ****** 2025-01-16 15:14:21.490163 | orchestrator | =============================================================================== 2025-01-16 15:14:21.490288 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 12.82s 2025-01-16 15:14:21.490308 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 10.95s 2025-01-16 15:14:21.490322 | orchestrator | magnum : Copying over magnum.conf -------------------------------------- 10.78s 2025-01-16 15:14:21.490336 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 10.44s 2025-01-16 15:14:21.490349 | orchestrator | service-ks-register : 
magnum | Creating endpoints ----------------------- 4.78s 2025-01-16 15:14:21.490363 | orchestrator | magnum : Check magnum containers ---------------------------------------- 3.43s 2025-01-16 15:14:21.490377 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.04s 2025-01-16 15:14:21.490390 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 2.81s 2025-01-16 15:14:21.490403 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.73s 2025-01-16 15:14:21.490416 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 2.70s 2025-01-16 15:14:21.490429 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 2.57s 2025-01-16 15:14:21.490442 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 2.51s 2025-01-16 15:14:21.490456 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 2.44s 2025-01-16 15:14:21.490469 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 2.43s 2025-01-16 15:14:21.490482 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 2.35s 2025-01-16 15:14:21.490523 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS key ------ 2.35s 2025-01-16 15:14:21.490536 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.14s 2025-01-16 15:14:21.490549 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 1.75s 2025-01-16 15:14:21.490564 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 1.74s 2025-01-16 15:14:21.490602 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 1.73s 2025-01-16 15:14:21.490613 | orchestrator | 2025-01-16 15:14:18 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state STARTED 2025-01-16 15:14:21.490622 | orchestrator | 2025-01-16 15:14:18 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:14:21.490630 | orchestrator | 2025-01-16 15:14:18 | INFO  | Task 422a6db5-b8a0-439e-857d-932aeef06a2c is in state STARTED 2025-01-16 15:14:21.490639 | orchestrator | 2025-01-16 15:14:18 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:14:21.490675 | orchestrator | 2025-01-16 15:14:21 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:14:21.496256 | orchestrator | 2025-01-16 15:14:21 | INFO  | Task 78022247-2245-42a5-a6ed-0f31fe01415d is in state SUCCESS 2025-01-16 15:14:21.498217 | orchestrator | 2025-01-16 15:14:21.498285 | orchestrator | 2025-01-16 15:14:21.498299 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-01-16 15:14:21.498313 | orchestrator | 2025-01-16 15:14:21.498324 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-01-16 15:14:21.498336 | orchestrator | Thursday 16 January 2025 15:10:10 +0000 (0:00:00.362) 0:00:00.362 ****** 2025-01-16 15:14:21.498348 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:14:21.498362 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:14:21.498374 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:14:21.498385 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:14:21.498396 | orchestrator | ok: [testbed-node-4] 2025-01-16 
15:14:21.498406 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:14:21.498418 | orchestrator | 2025-01-16 15:14:21.498429 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-01-16 15:14:21.498442 | orchestrator | Thursday 16 January 2025 15:10:11 +0000 (0:00:00.672) 0:00:01.034 ****** 2025-01-16 15:14:21.498453 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-01-16 15:14:21.498465 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-01-16 15:14:21.498919 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-01-16 15:14:21.498955 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-01-16 15:14:21.499042 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-01-16 15:14:21.499054 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-01-16 15:14:21.499066 | orchestrator | 2025-01-16 15:14:21.499077 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-01-16 15:14:21.499088 | orchestrator | 2025-01-16 15:14:21.499100 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-01-16 15:14:21.499111 | orchestrator | Thursday 16 January 2025 15:10:11 +0000 (0:00:00.582) 0:00:01.617 ****** 2025-01-16 15:14:21.499150 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 15:14:21.499166 | orchestrator | 2025-01-16 15:14:21.499176 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-01-16 15:14:21.499203 | orchestrator | Thursday 16 January 2025 15:10:12 +0000 (0:00:00.862) 0:00:02.479 ****** 2025-01-16 15:14:21.499855 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:14:21.499942 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:14:21.499959 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:14:21.499971 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:14:21.499982 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:14:21.499993 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:14:21.500004 | orchestrator | 2025-01-16 15:14:21.500016 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-01-16 15:14:21.500027 | orchestrator | Thursday 16 January 2025 15:10:13 +0000 (0:00:00.804) 0:00:03.283 ****** 2025-01-16 15:14:21.500039 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:14:21.500072 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:14:21.500083 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:14:21.500094 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:14:21.500105 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:14:21.500115 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:14:21.500126 | orchestrator | 2025-01-16 15:14:21.500137 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-01-16 15:14:21.500148 | orchestrator | Thursday 16 January 2025 15:10:14 +0000 (0:00:00.868) 0:00:04.152 ****** 2025-01-16 15:14:21.500158 | orchestrator | ok: [testbed-node-0] => { 2025-01-16 15:14:21.500171 | orchestrator |  "changed": false, 2025-01-16 15:14:21.500182 | orchestrator |  "msg": "All assertions passed" 2025-01-16 15:14:21.500193 | orchestrator | } 2025-01-16 15:14:21.500203 | orchestrator | ok: [testbed-node-1] => { 2025-01-16 15:14:21.500215 | 
orchestrator |  "changed": false, 2025-01-16 15:14:21.500749 | orchestrator |  "msg": "All assertions passed" 2025-01-16 15:14:21.500772 | orchestrator | } 2025-01-16 15:14:21.500784 | orchestrator | ok: [testbed-node-2] => { 2025-01-16 15:14:21.500795 | orchestrator |  "changed": false, 2025-01-16 15:14:21.500806 | orchestrator |  "msg": "All assertions passed" 2025-01-16 15:14:21.501217 | orchestrator | } 2025-01-16 15:14:21.501251 | orchestrator | ok: [testbed-node-3] => { 2025-01-16 15:14:21.501263 | orchestrator |  "changed": false, 2025-01-16 15:14:21.501275 | orchestrator |  "msg": "All assertions passed" 2025-01-16 15:14:21.501286 | orchestrator | } 2025-01-16 15:14:21.501297 | orchestrator | ok: [testbed-node-4] => { 2025-01-16 15:14:21.501309 | orchestrator |  "changed": false, 2025-01-16 15:14:21.501319 | orchestrator |  "msg": "All assertions passed" 2025-01-16 15:14:21.501330 | orchestrator | } 2025-01-16 15:14:21.501342 | orchestrator | ok: [testbed-node-5] => { 2025-01-16 15:14:21.501708 | orchestrator |  "changed": false, 2025-01-16 15:14:21.501726 | orchestrator |  "msg": "All assertions passed" 2025-01-16 15:14:21.501737 | orchestrator | } 2025-01-16 15:14:21.501750 | orchestrator | 2025-01-16 15:14:21.501763 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-01-16 15:14:21.502342 | orchestrator | Thursday 16 January 2025 15:10:14 +0000 (0:00:00.446) 0:00:04.599 ****** 2025-01-16 15:14:21.502520 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:14:21.502537 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:14:21.502561 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:14:21.502570 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:14:21.502579 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:14:21.502588 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:14:21.502598 | orchestrator | 2025-01-16 15:14:21.502608 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-01-16 15:14:21.502618 | orchestrator | Thursday 16 January 2025 15:10:15 +0000 (0:00:00.524) 0:00:05.123 ****** 2025-01-16 15:14:21.502627 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-01-16 15:14:21.502638 | orchestrator | 2025-01-16 15:14:21.502648 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-01-16 15:14:21.502658 | orchestrator | Thursday 16 January 2025 15:10:17 +0000 (0:00:02.287) 0:00:07.411 ****** 2025-01-16 15:14:21.502667 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-01-16 15:14:21.502677 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-01-16 15:14:21.502686 | orchestrator | 2025-01-16 15:14:21.503211 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-01-16 15:14:21.503250 | orchestrator | Thursday 16 January 2025 15:10:22 +0000 (0:00:04.337) 0:00:11.749 ****** 2025-01-16 15:14:21.503578 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-01-16 15:14:21.503833 | orchestrator | 2025-01-16 15:14:21.503845 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-01-16 15:14:21.503856 | orchestrator | Thursday 16 January 2025 15:10:24 +0000 (0:00:02.377) 0:00:14.126 ****** 2025-01-16 15:14:21.503885 | orchestrator | [WARNING]: Module 
did not set no_log for update_password 2025-01-16 15:14:21.503958 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-01-16 15:14:21.504019 | orchestrator | 2025-01-16 15:14:21.504030 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-01-16 15:14:21.504039 | orchestrator | Thursday 16 January 2025 15:10:27 +0000 (0:00:02.681) 0:00:16.808 ****** 2025-01-16 15:14:21.504057 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-01-16 15:14:21.504067 | orchestrator | 2025-01-16 15:14:21.504077 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-01-16 15:14:21.504087 | orchestrator | Thursday 16 January 2025 15:10:29 +0000 (0:00:02.319) 0:00:19.128 ****** 2025-01-16 15:14:21.504097 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-01-16 15:14:21.504106 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-01-16 15:14:21.504140 | orchestrator | 2025-01-16 15:14:21.504151 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-01-16 15:14:21.504161 | orchestrator | Thursday 16 January 2025 15:10:35 +0000 (0:00:05.670) 0:00:24.799 ****** 2025-01-16 15:14:21.504170 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:14:21.504178 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:14:21.504187 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:14:21.504197 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:14:21.504206 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:14:21.504256 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:14:21.504471 | orchestrator | 2025-01-16 15:14:21.504521 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-01-16 15:14:21.504532 | orchestrator | Thursday 16 January 2025 15:10:35 +0000 (0:00:00.470) 0:00:25.269 ****** 2025-01-16 15:14:21.504542 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:14:21.504551 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:14:21.504721 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:14:21.504739 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:14:21.504750 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:14:21.504760 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:14:21.504770 | orchestrator | 2025-01-16 15:14:21.504781 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-01-16 15:14:21.504862 | orchestrator | Thursday 16 January 2025 15:10:39 +0000 (0:00:03.794) 0:00:29.064 ****** 2025-01-16 15:14:21.505093 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:14:21.505110 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:14:21.505120 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:14:21.505131 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:14:21.505141 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:14:21.505152 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:14:21.505162 | orchestrator | 2025-01-16 15:14:21.505173 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-01-16 15:14:21.505184 | orchestrator | Thursday 16 January 2025 15:10:40 +0000 (0:00:01.121) 0:00:30.186 ****** 2025-01-16 15:14:21.505195 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:14:21.505244 | orchestrator | skipping: [testbed-node-1] 2025-01-16 
15:14:21.505256 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:14:21.505267 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:14:21.505277 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:14:21.505287 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:14:21.505297 | orchestrator | 2025-01-16 15:14:21.505308 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-01-16 15:14:21.505318 | orchestrator | Thursday 16 January 2025 15:10:43 +0000 (0:00:03.358) 0:00:33.544 ****** 2025-01-16 15:14:21.505332 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:14:21.506181 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.506254 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.506271 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.506313 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.506323 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.506349 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.506362 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.506383 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 
15:14:21.506394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-01-16 15:14:21.506404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.506413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.506440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.506464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-01-16 15:14:21.506474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.506505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.506529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.506551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.506567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.506580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.506590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.506603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.506614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.506631 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.506648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.506663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.506672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.506683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.506693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-01-16 15:14:21.506708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.506717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.506743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.506754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.506764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.506773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.506788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.506798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.506814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.506834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 
'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.506844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.506854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.506874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.506885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.507076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.507112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.507124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.507135 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-01-16 15:14:21.507150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.507157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.507163 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 
'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.507169 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.507180 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.507186 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.507193 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.507205 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.507211 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.507222 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:14:21.507228 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.507235 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.507245 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.507251 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.507259 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.507270 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.507277 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.507283 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 
'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.507293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:21.507299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.507306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.507315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  
2025-01-16 15:14:21.507322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:21.507331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:21.507338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.507344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.507353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 
'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.507360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.507372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.507379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.507385 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 
15:14:21.507391 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.507403 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.507410 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.507419 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.507425 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.507432 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.507438 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.507448 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.507454 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-01-16 15:14:21.507464 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.507471 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.507478 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.507506 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.507546 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.507553 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.507571 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': 
{'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.507577 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-01-16 15:14:21.507584 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.507590 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.507596 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.507606 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.507617 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.507624 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.507630 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.507636 | orchestrator | 2025-01-16 15:14:21.507643 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-01-16 15:14:21.507649 | orchestrator | Thursday 16 January 2025 15:10:47 +0000 (0:00:03.252) 0:00:36.797 ****** 2025-01-16 15:14:21.507655 | orchestrator | [WARNING]: Skipped 2025-01-16 15:14:21.507662 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-01-16 15:14:21.507669 | orchestrator | due to this access issue: 2025-01-16 15:14:21.507679 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-01-16 15:14:21.507686 | orchestrator | a directory 2025-01-16 15:14:21.507692 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-01-16 15:14:21.507698 | 
orchestrator | 2025-01-16 15:14:21.507703 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-01-16 15:14:21.507709 | orchestrator | Thursday 16 January 2025 15:10:47 +0000 (0:00:00.486) 0:00:37.283 ****** 2025-01-16 15:14:21.507716 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 15:14:21.507722 | orchestrator | 2025-01-16 15:14:21.507728 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-01-16 15:14:21.507734 | orchestrator | Thursday 16 January 2025 15:10:48 +0000 (0:00:01.229) 0:00:38.513 ****** 2025-01-16 15:14:21.507746 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-01-16 15:14:21.507753 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-01-16 15:14:21.507759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-01-16 15:14:21.507766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-01-16 15:14:21.507772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-01-16 15:14:21.507785 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-01-16 15:14:21.507792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:21.507798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:21.507804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:21.507810 | orchestrator | 2025-01-16 15:14:21.507816 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-01-16 15:14:21.507822 | orchestrator | Thursday 16 January 2025 15:10:52 +0000 (0:00:04.047) 0:00:42.561 ****** 2025-01-16 15:14:21.507829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:14:21.507838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.507844 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:14:21.507854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:14:21.507860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.507866 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:14:21.507872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:14:21.507878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.507888 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:14:21.507894 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.507900 | orchestrator | skipping: 
[testbed-node-3] 2025-01-16 15:14:21.507909 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.507915 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:14:21.507921 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.507927 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:14:21.507933 | orchestrator | 2025-01-16 15:14:21.507939 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-01-16 15:14:21.507945 | orchestrator | Thursday 16 January 2025 15:10:55 +0000 (0:00:02.847) 0:00:45.408 ****** 2025-01-16 15:14:21.507951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:14:21.507957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.507967 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:14:21.507973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:14:21.507982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.507988 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:14:21.507994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:14:21.508000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.508006 | orchestrator | skipping: [testbed-node-2] 
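The backend internal TLS certificate and key copies skip on testbed-node-0, -1 and -2 above, which is consistent with the service map itself: the neutron_tls_proxy haproxy entries declare tls_backend: 'yes' but are themselves disabled, so there is no backend TLS material to distribute. A small illustrative check over the same item layout, again assuming only the data shown in the log rather than kolla-ansible's actual conditionals:

# Illustrative only: derives "is backend TLS in play?" from the haproxy
# sub-entries shown in the log; the real decision is made by role variables.
def needs_backend_tls(item):
    haproxy = item["value"].get("haproxy", {})
    return any(
        entry.get("tls_backend") == "yes" and bool(entry.get("enabled"))
        for entry in haproxy.values()
    )

tls_proxy_item = {
    "key": "neutron-tls-proxy",
    "value": {
        "enabled": "no",
        "haproxy": {
            "neutron_tls_proxy": {"enabled": False, "tls_backend": "yes"},
            "neutron_tls_proxy_external": {"enabled": False, "tls_backend": "yes"},
        },
    },
}
print(needs_backend_tls(tls_proxy_item))  # False -> the copy tasks skip, as above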
2025-01-16 15:14:21.508012 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.508021 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:14:21.508028 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.508035 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:14:21.508050 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.508060 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:14:21.508071 | orchestrator | 2025-01-16 15:14:21.508081 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-01-16 15:14:21.508091 | orchestrator | Thursday 16 January 2025 15:10:59 +0000 (0:00:03.345) 0:00:48.753 ****** 2025-01-16 15:14:21.508099 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:14:21.508105 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:14:21.508111 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:14:21.508117 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:14:21.508123 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:14:21.508128 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:14:21.508134 | orchestrator | 2025-01-16 15:14:21.508140 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-01-16 15:14:21.508146 | orchestrator | Thursday 16 January 2025 15:11:02 +0000 (0:00:03.158) 0:00:51.911 
****** 2025-01-16 15:14:21.508152 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:14:21.508158 | orchestrator | 2025-01-16 15:14:21.508164 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-01-16 15:14:21.508170 | orchestrator | Thursday 16 January 2025 15:11:02 +0000 (0:00:00.166) 0:00:52.078 ****** 2025-01-16 15:14:21.508176 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:14:21.508182 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:14:21.508191 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:14:21.508197 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:14:21.508203 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:14:21.508210 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:14:21.508219 | orchestrator | 2025-01-16 15:14:21.508229 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-01-16 15:14:21.508243 | orchestrator | Thursday 16 January 2025 15:11:03 +0000 (0:00:00.970) 0:00:53.048 ****** 2025-01-16 15:14:21.508253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:14:21.508263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.508276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.508287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.508296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.508310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.508321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.508331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.508340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 
'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.508354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.508364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.508375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.508394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.508403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.508414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.508424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.508431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.508441 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:14:21.508447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:14:21.508455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.508465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.508480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.508531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.508546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.508556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.508563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.508569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.508575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.508585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': 
'30'}}})  2025-01-16 15:14:21.508592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.508601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.508608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.508614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.508621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.508633 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.508639 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:14:21.508645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:14:21.508655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.508661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.508667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.508677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.508684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.508699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.508705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.508712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.508718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.508724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.508734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.508744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.508750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.508756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 
'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.508763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.508770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.508776 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:14:21 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:14:21.510118 | orchestrator | 2025-01-16 15:14:21 | INFO  | Task 422a6db5-b8a0-439e-857d-932aeef06a2c is in state STARTED 2025-01-16 15:14:21.510156 | orchestrator | 2025-01-16 15:14:21 | INFO  | Task 1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state STARTED 2025-01-16 15:14:21.510163 | orchestrator | 2025-01-16 15:14:21 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:14:21.510178 | orchestrator | 2025-01-16 15:14:21.508784 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external':
False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:14:21.510186 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.510193 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.510200 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.510206 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.510227 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 
'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.510234 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.510242 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.510248 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.510254 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.510270 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.510289 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.510299 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.510305 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.510311 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:14:21.510318 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 
192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.510325 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.510338 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.510345 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.510351 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.510357 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:14:21.510363 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 
'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.510369 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.510379 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.510389 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.510395 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.510402 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.510412 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.510421 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.510435 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.510444 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.510458 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.510468 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.510479 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.510502 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.510518 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:14:21.510528 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:14:21.510542 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': 
{'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.510552 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.510562 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.510572 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.510592 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.510602 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.510612 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.510626 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.510637 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.510646 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.510657 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.510669 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.510676 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.510686 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.510693 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.510699 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.510705 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:14:21.510715 | orchestrator | 2025-01-16 15:14:21.510721 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-01-16 15:14:21.510729 | orchestrator | Thursday 16 January 2025 15:11:07 +0000 (0:00:04.587) 0:00:57.636 ****** 2025-01-16 15:14:21.510736 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:14:21.510742 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.510763 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.510770 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.510776 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.510787 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.510793 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.510800 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.510818 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.510830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-01-16 15:14:21.510840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.510856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.510865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}}) 
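The per-item results above follow one observable pattern: an item reports changed only when its 'value' mapping carries both 'enabled': True and 'host_in_groups': True, and every other combination is skipped (including 'enabled': 'no' on neutron-tls-proxy). The real conditional lives inside the kolla-ansible neutron role; the short Python sketch below only reproduces that observable filter, and the service names and the helper enabled_and_mapped are illustrative, not the role's actual code.

# Illustrative sketch only: mirrors the skip/changed pattern visible in the
# loop results above; it is not kolla-ansible's actual implementation.
services = {
    # shapes mirror the 'value' mappings printed by the task
    "neutron-server": {"enabled": True, "host_in_groups": True},              # -> changed
    "neutron-ovn-metadata-agent": {"enabled": True, "host_in_groups": True},  # -> changed
    "ironic-neutron-agent": {"enabled": True, "host_in_groups": False},       # -> skipping
    "neutron-openvswitch-agent": {"enabled": False, "host_in_groups": True},  # -> skipping
    "neutron-tls-proxy": {"enabled": "no", "host_in_groups": False},          # -> skipping
}

def enabled_and_mapped(service: dict) -> bool:
    """True only when the service is enabled and this host is in its group."""
    enabled = service.get("enabled", False)
    if isinstance(enabled, str):
        # 'enabled' appears as the string 'no' for neutron-tls-proxy in this log
        enabled = enabled.lower() in ("yes", "true", "1")
    return bool(enabled) and bool(service.get("host_in_groups", False))

for name, svc in services.items():
    print(f"{name}: {'changed' if enabled_and_mapped(svc) else 'skipping'}")

Applied to the items printed for any one node, this reproduces which containers actually get their config.json copied on that node.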
 2025-01-16 15:14:21.510875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.510890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.510902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.510914 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:14:21.510929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-01-16 15:14:21.510936 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.510942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.510953 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.510959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.510969 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.510975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.511001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.511014 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.511021 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.511027 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 
'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.511037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.511044 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.511050 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.511056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.511075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.511084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.511102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.511112 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:14:21.511122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.511136 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.511153 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.511168 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.511179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.511189 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.511207 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.511217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.511232 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.511243 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.511257 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.511267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.511273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 
'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.511289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.511298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.511305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.511315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.511321 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-01-16 15:14:21.511327 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.511333 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.511344 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.511355 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.511365 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': 
{'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.511372 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.511378 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.511389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:21.511396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-01-16 15:14:21.511405 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 
'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-01-16 15:14:21.511415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.511427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.511433 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.511439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.511449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.511458 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.511465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.511471 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-01-16 15:14:21.511482 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.511506 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.511513 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.511531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.511538 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.511550 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.511557 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 
'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.511563 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.511577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.511590 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.511610 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': 
{'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.511619 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.511629 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.511638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.511656 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.511669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}}})  2025-01-16 15:14:21.511678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.511687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.511704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.511714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.511727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.511741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.511750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:21.511767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.511776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.511785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.511799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 
'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:21.511820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.511962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.511977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.511984 | orchestrator | 2025-01-16 15:14:21.511990 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-01-16 15:14:21.511996 | orchestrator | Thursday 16 January 2025 15:11:13 +0000 (0:00:05.057) 0:01:02.693 ****** 2025-01-16 15:14:21.512003 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:14:21.512016 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.512030 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.512048 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.512055 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.512061 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.512068 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.512078 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.512087 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.512100 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:14:21.512107 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.512113 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.512123 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.512139 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.512146 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.512152 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.512158 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.512165 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.512171 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:14:21.512180 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.512195 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.512203 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.512209 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.512215 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.512225 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 
'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.512231 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.512241 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.512252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-01-16 15:14:21.512259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.512265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.512276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.512291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.512298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.512304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.512310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.512319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.512329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.512336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.512346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.512352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 
15:14:21.512363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-01-16 15:14:21.512376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.512383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.512389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.512399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.512458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.512468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.512479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.512508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.512517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.512531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.512541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.512552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.512570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-01-16 15:14:21.512586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.512597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.512611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.512622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.512632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.512654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.512666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.512676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.512690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.512700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.512711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.512775 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.512791 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-01-16 15:14:21.512802 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.512812 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.512827 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.512837 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.512854 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.512876 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.512887 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.512897 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-01-16 15:14:21.512918 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 
'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.512937 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.512953 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.512963 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.512973 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.512992 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 
'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.513009 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.513020 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-01-16 15:14:21.513035 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.513045 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.513055 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 
'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.513066 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.513088 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.513101 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.513116 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.513127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:21.513140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.513164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.513179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.513320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:21.513343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.513354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.513363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.513387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:21.513462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.513480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.513543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.513550 | orchestrator | 2025-01-16 15:14:21.513556 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-01-16 15:14:21.513563 | orchestrator | Thursday 16 January 2025 15:11:26 +0000 (0:00:13.669) 0:01:16.363 ****** 2025-01-16 15:14:21.513570 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:14:21.513577 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.513644 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.513655 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.513662 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.513668 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.513675 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.513722 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:14:21.513739 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.513746 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.513752 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.513758 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 
'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.513770 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.513810 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.513824 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.513830 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.513837 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.513843 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.513850 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.513863 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.513905 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.513914 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.513921 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.513927 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.513934 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.513951 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.513989 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 
'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.513998 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.514005 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.514012 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.514039 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.514052 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:14:21.514059 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.514109 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.514122 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.514132 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:14:21.514141 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:14:21.514151 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.514160 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.514233 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.514248 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.514255 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.514262 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.514268 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.514282 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.514295 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.514336 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.514345 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.514352 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 
'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.514358 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.514370 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.514381 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.514421 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.514430 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:14:21.514436 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-01-16 15:14:21.514443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.514449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.514469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-01-16 15:14:21.514531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': 
{'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.514542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.514549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.514555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.514566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.514579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.514624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.514634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.514642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.514648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.514658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': 
{'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.514665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.514671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.514728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.514743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.514753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.514769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.514779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.514798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.514859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.514873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.514883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 
'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.514892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.514920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.514933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.514997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.515012 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.515031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.515048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.515058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.515067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-01-16 15:14:21.515124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.515139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.515156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.515177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 
'timeout': '30'}}})  2025-01-16 15:14:21.515188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.515198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.515265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.515281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.515307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.515318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.515339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.515349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.515405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.515421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.515447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.515457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.515466 | orchestrator | 2025-01-16 15:14:21.515475 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-01-16 15:14:21.515536 | orchestrator | Thursday 16 January 2025 15:11:29 +0000 (0:00:02.582) 0:01:18.946 ****** 2025-01-16 15:14:21.515549 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:14:21.515558 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:14:21.515568 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:14:21.515578 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:14:21.515587 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:14:21.515596 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:14:21.515604 | orchestrator | 2025-01-16 15:14:21.515614 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-01-16 15:14:21.515623 | orchestrator | Thursday 16 January 2025 15:11:33 +0000 (0:00:04.691) 0:01:23.637 ****** 2025-01-16 15:14:21.515633 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:14:21.515718 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.515744 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.515756 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.515783 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.515796 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.515806 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.515873 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.515900 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.515911 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.515934 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.515945 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.515955 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.516018 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.516034 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.516066 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.516078 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 
'timeout': '30'}}})  2025-01-16 15:14:21.516088 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:14:21.516099 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:14:21.516109 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.516178 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.516204 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.516212 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.516218 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.516236 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.516243 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.516288 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.516302 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.516315 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.516322 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.516328 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.516335 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.516385 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.516419 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.516430 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.516440 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:14:21.516450 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:14:21.516460 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.516537 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.516555 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.516562 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.516568 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.516584 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.516590 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 
'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.516597 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.516641 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.516650 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.516663 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.516669 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.516676 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.516682 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.516727 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.516736 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.516743 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:14:21.516755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-01-16 15:14:21.516762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.516768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.516809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.516818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.516824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.516831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.516844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.516850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.516860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 
6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.516900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.516908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.516918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.516938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.516949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 
15:14:21.516965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.517024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.517049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-01-16 15:14:21.517060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.517069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.517088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.517144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.517159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.517175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.517199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}}})  2025-01-16 15:14:21.517210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.517227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.517235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.517280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.517290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.517306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.517313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.517324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.517330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.517375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-01-16 15:14:21.517385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.517392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.517402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.517409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.517449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.517458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.517465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.517479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.517501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.517512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.517519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.517560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.517577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.517584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.517595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-01-16 15:14:21.517601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-01-16 15:14:21.517607 | orchestrator |
2025-01-16 15:14:21.517614 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] ****************************
2025-01-16 15:14:21.517620 | orchestrator | Thursday 16 January 2025 15:11:36 +0000 (0:00:02.952) 0:01:26.589 ******
2025-01-16 15:14:21.517626 | orchestrator | skipping: [testbed-node-1]
2025-01-16 15:14:21.517633 | orchestrator | skipping: [testbed-node-0]
2025-01-16 15:14:21.517639 | orchestrator | skipping: [testbed-node-2]
2025-01-16 15:14:21.517645 | orchestrator | skipping: [testbed-node-3]
2025-01-16 15:14:21.517651 | orchestrator | skipping: [testbed-node-4]
2025-01-16 15:14:21.517657 | orchestrator | skipping: [testbed-node-5]
2025-01-16 15:14:21.517663 | orchestrator |
2025-01-16 15:14:21.517669 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2025-01-16 15:14:21.517676 | orchestrator | Thursday 16 January 2025 15:11:38 +0000 (0:00:01.548) 0:01:28.137 ******
2025-01-16 15:14:21.517682 | orchestrator | skipping: [testbed-node-0]
2025-01-16 15:14:21.517688 | orchestrator | skipping: [testbed-node-2]
2025-01-16 15:14:21.517694 | orchestrator | skipping: [testbed-node-1]
2025-01-16 15:14:21.517700 | orchestrator | skipping: [testbed-node-4]
2025-01-16 15:14:21.517706 | orchestrator | skipping: [testbed-node-3]
2025-01-16 15:14:21.517712 | orchestrator | skipping: [testbed-node-5]
2025-01-16 15:14:21.517718 | orchestrator |
2025-01-16 15:14:21.517724 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2025-01-16 15:14:21.517730 | orchestrator | Thursday 16 January 2025 15:11:40 +0000 (0:00:01.570) 0:01:29.708 ******
2025-01-16 15:14:21.517736 | orchestrator | skipping: [testbed-node-2]
2025-01-16 15:14:21.517742 | orchestrator | skipping: [testbed-node-0]
2025-01-16 15:14:21.517781 | orchestrator | skipping: [testbed-node-1]
2025-01-16 15:14:21.517792 | orchestrator | skipping: [testbed-node-3]
2025-01-16 15:14:21.517803 | orchestrator | skipping: [testbed-node-5]
2025-01-16 15:14:21.517813 | orchestrator | skipping: [testbed-node-4]
2025-01-16 15:14:21.517824 | orchestrator |
2025-01-16 15:14:21.517834 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2025-01-16 15:14:21.517843 | orchestrator | Thursday 16 January 2025 15:11:42 +0000 (0:00:02.200) 0:01:31.908 ******
2025-01-16 15:14:21.517853 | orchestrator | skipping: [testbed-node-2]
2025-01-16 15:14:21.517863 | orchestrator | skipping: [testbed-node-0]
2025-01-16 15:14:21.517873 | orchestrator | skipping: [testbed-node-1]
2025-01-16 15:14:21.517883 | orchestrator | skipping: [testbed-node-3]
2025-01-16 15:14:21.517892 | orchestrator | skipping: [testbed-node-4]
2025-01-16 15:14:21.517903 | orchestrator | skipping: [testbed-node-5]
2025-01-16 15:14:21.517920 | orchestrator |
2025-01-16 15:14:21.517931 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2025-01-16 15:14:21.517941 | orchestrator | Thursday 16 January 2025 15:11:44 +0000 (0:00:03.536) 0:01:34.231 ******
2025-01-16 15:14:21.517952 | orchestrator | skipping: [testbed-node-0]
2025-01-16 15:14:21.517959 | orchestrator | skipping: [testbed-node-1]
2025-01-16 15:14:21.517965 | orchestrator | skipping: [testbed-node-2]
2025-01-16 15:14:21.517970 | orchestrator | skipping: [testbed-node-3]
2025-01-16 15:14:21.517976 | orchestrator | skipping: [testbed-node-4]
2025-01-16 15:14:21.517987 | orchestrator | skipping: [testbed-node-5]
2025-01-16 15:14:21.517993 | orchestrator |
2025-01-16 15:14:21.517999 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2025-01-16 15:14:21.518005 | orchestrator | Thursday 16 January 2025 15:11:48 +0000 (0:00:03.536) 0:01:37.768 ******
2025-01-16 15:14:21.518011 | orchestrator | skipping: [testbed-node-0]
2025-01-16 15:14:21.518040 | orchestrator | skipping: [testbed-node-1]
2025-01-16 15:14:21.518046 | orchestrator | skipping: [testbed-node-2]
2025-01-16 15:14:21.518053 | orchestrator | skipping: [testbed-node-3]
2025-01-16 15:14:21.518059 | orchestrator | skipping: [testbed-node-4]
2025-01-16 15:14:21.518065 | orchestrator | skipping: [testbed-node-5]
2025-01-16 15:14:21.518071 | orchestrator |
2025-01-16 15:14:21.518077 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2025-01-16 15:14:21.518083 | orchestrator | Thursday 16 January 2025 15:11:49 +0000 (0:00:01.667) 0:01:39.435 ******
2025-01-16 15:14:21.518090 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-01-16 15:14:21.518097 | orchestrator | skipping: [testbed-node-0]
2025-01-16 15:14:21.518103 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-01-16 15:14:21.518109 | orchestrator | skipping: [testbed-node-3]
2025-01-16 15:14:21.518115 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-01-16 15:14:21.518121 | orchestrator | skipping: [testbed-node-2]
2025-01-16 15:14:21.518127 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-01-16 15:14:21.518133 | orchestrator | skipping: [testbed-node-1]
2025-01-16 15:14:21.518139 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-01-16 15:14:21.518145 | orchestrator | skipping: [testbed-node-4]
2025-01-16 15:14:21.518151 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-01-16 15:14:21.518157 | orchestrator | skipping: [testbed-node-5]
2025-01-16 15:14:21.518163 | orchestrator |
2025-01-16 15:14:21.518169 | orchestrator | TASK [neutron : Copying over l3_agent.ini] *************************************
2025-01-16 15:14:21.518175 | orchestrator | Thursday 16 January 2025 15:11:52 +0000 (0:00:02.445) 0:01:41.881 ******
2025-01-16 15:14:21.518182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image':
'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:14:21.518249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.518264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.518271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.518278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.518285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.518292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.518301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.518354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.518365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 
6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.518371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.518378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.518384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.518395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.518429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 
15:14:21.518437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.518443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.518450 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:14:21.518457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:14:21.518463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.518551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.518564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.518570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.518577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.518584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.518590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.518601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.518640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.518649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.518656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.518663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.518669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 
'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.518685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.518726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.518735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.518742 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:14:21.518751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:14:21.518762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.518778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.518852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.518869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent 
' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.518879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.518890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.518900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.518916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.518936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.518984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.518993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.518999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.519006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.519024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.519031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.519069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.519077 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:14:21.519084 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:14:21.519090 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.519105 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.519118 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.519161 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.519170 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.519176 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.519182 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.519193 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 
'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.519205 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.519212 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.519253 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.519262 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.519269 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.519287 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.519295 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.519301 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.519307 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:14:21.519345 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:14:21.519353 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.519364 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:14:21.519371 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.519385 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.519422 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.519431 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.519441 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.519447 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.519454 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.519466 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.519520 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.519531 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.519537 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.519553 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.519559 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.519569 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.519614 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.519628 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.519638 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.519653 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}}})  2025-01-16 15:14:21.519664 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.519680 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.519686 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.519731 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.519740 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:14:21.519747 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.519763 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.519779 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.519785 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.519792 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.519832 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.519841 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 
'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.519859 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.519866 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.519872 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:14:21.519878 | orchestrator | 2025-01-16 15:14:21.519884 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-01-16 15:14:21.519891 | orchestrator | Thursday 16 January 2025 15:11:54 +0000 (0:00:02.334) 0:01:44.216 ****** 2025-01-16 15:14:21.519897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  
2025-01-16 15:14:21.519939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.519956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.519962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.519968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.519975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.519985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.520031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.520045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.520051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.520057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.520064 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.520076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.520083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.520123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.520137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.520143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 
'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.520150 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:14:21.520163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:14:21.520170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.520208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.520221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.520227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.520233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.520248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.520254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.520296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.520309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.520316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.520330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.520336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.520343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.520397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 
'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.520421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.520431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.520441 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:14:21.520460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:14:21.520475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.520506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.520571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.520581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.520587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.520602 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.520610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.520616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.520661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.520670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.520681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.520691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.520713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.520723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.520785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.520796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.520802 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:14:21.520809 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:14:21.520815 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.520828 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.520840 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.520882 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:14:21.520891 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.520901 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.520921 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.520939 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': 
{'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.521001 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.521017 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.521041 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.521053 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.521060 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.521072 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.521080 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.521129 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.521138 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.521154 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.521162 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.521176 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.521183 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.521232 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.521242 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.521258 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.521266 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.521277 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.521284 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.521291 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.521343 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.521354 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.521361 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.521372 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.521379 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.521387 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:14:21.521393 | 
orchestrator | skipping: [testbed-node-5] 2025-01-16 15:14:21.521421 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:14:21.521429 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.521436 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.521447 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.521454 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.521474 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.521482 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.521509 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.521522 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.521535 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.521542 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.521549 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.521571 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.521584 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.521592 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.521603 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.521609 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.521616 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:14:21.521623 | orchestrator | 2025-01-16 15:14:21.521630 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-01-16 15:14:21.521637 | orchestrator | Thursday 16 January 2025 15:11:57 +0000 (0:00:02.932) 0:01:47.149 ****** 2025-01-16 15:14:21.521644 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:14:21.521651 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:14:21.521658 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:14:21.521665 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:14:21.521672 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:14:21.521678 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:14:21.521685 | orchestrator | 2025-01-16 15:14:21.521692 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-01-16 15:14:21.521699 | orchestrator | Thursday 16 January 2025 15:11:59 +0000 (0:00:01.739) 0:01:48.888 ****** 2025-01-16 15:14:21.521706 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:14:21.521712 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:14:21.521719 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:14:21.521726 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:14:21.521732 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:14:21.521739 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:14:21.521746 | orchestrator | 2025-01-16 15:14:21.521752 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************ 2025-01-16 15:14:21.521759 | orchestrator | Thursday 16 January 2025 15:12:05 +0000 (0:00:06.176) 0:01:55.065 ****** 2025-01-16 15:14:21.521766 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:14:21.521773 | orchestrator | skipping: [testbed-node-1] 
2025-01-16 15:14:21.521794 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:14:21.521801 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:14:21.521808 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:14:21.521814 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:14:21.521821 | orchestrator | 2025-01-16 15:14:21.521827 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-01-16 15:14:21.521834 | orchestrator | Thursday 16 January 2025 15:12:07 +0000 (0:00:01.726) 0:01:56.792 ****** 2025-01-16 15:14:21.521841 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:14:21.521847 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:14:21.521854 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:14:21.521865 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:14:21.521872 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:14:21.521879 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:14:21.521885 | orchestrator | 2025-01-16 15:14:21.521892 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-01-16 15:14:21.521899 | orchestrator | Thursday 16 January 2025 15:12:09 +0000 (0:00:02.018) 0:01:58.811 ****** 2025-01-16 15:14:21.521905 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:14:21.521912 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:14:21.521919 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:14:21.521927 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:14:21.521934 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:14:21.521941 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:14:21.521947 | orchestrator | 2025-01-16 15:14:21.521954 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-01-16 15:14:21.521961 | orchestrator | Thursday 16 January 2025 15:12:13 +0000 (0:00:04.732) 0:02:03.543 ****** 2025-01-16 15:14:21.521968 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:14:21.521975 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:14:21.521983 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:14:21.521991 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:14:21.521998 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:14:21.522005 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:14:21.522012 | orchestrator | 2025-01-16 15:14:21.522041 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-01-16 15:14:21.522049 | orchestrator | Thursday 16 January 2025 15:12:17 +0000 (0:00:03.230) 0:02:06.773 ****** 2025-01-16 15:14:21.522056 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:14:21.522064 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:14:21.522071 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:14:21.522078 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:14:21.522086 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:14:21.522093 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:14:21.522100 | orchestrator | 2025-01-16 15:14:21.522107 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-01-16 15:14:21.522115 | orchestrator | Thursday 16 January 2025 15:12:20 +0000 (0:00:02.904) 0:02:09.678 ****** 2025-01-16 15:14:21.522122 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:14:21.522129 | orchestrator | skipping: 
[testbed-node-1] 2025-01-16 15:14:21.522136 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:14:21.522147 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:14:21.522157 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:14:21.522168 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:14:21.522178 | orchestrator | 2025-01-16 15:14:21.522188 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-01-16 15:14:21.522198 | orchestrator | Thursday 16 January 2025 15:12:25 +0000 (0:00:05.087) 0:02:14.766 ****** 2025-01-16 15:14:21.522209 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:14:21.522220 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:14:21.522231 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:14:21.522242 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:14:21.522254 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:14:21.522264 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:14:21.522276 | orchestrator | 2025-01-16 15:14:21.522291 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-01-16 15:14:21.522299 | orchestrator | Thursday 16 January 2025 15:12:28 +0000 (0:00:03.735) 0:02:18.501 ****** 2025-01-16 15:14:21.522306 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:14:21.522313 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:14:21.522320 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:14:21.522326 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:14:21.522333 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:14:21.522339 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:14:21.522351 | orchestrator | 2025-01-16 15:14:21.522358 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-01-16 15:14:21.522365 | orchestrator | Thursday 16 January 2025 15:12:32 +0000 (0:00:03.192) 0:02:21.693 ****** 2025-01-16 15:14:21.522372 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-01-16 15:14:21.522379 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:14:21.522389 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-01-16 15:14:21.522395 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:14:21.522402 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-01-16 15:14:21.522409 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:14:21.522415 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-01-16 15:14:21.522422 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:14:21.522428 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-01-16 15:14:21.522435 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:14:21.522442 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-01-16 15:14:21.522448 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:14:21.522455 | orchestrator | 2025-01-16 15:14:21.522462 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-01-16 15:14:21.522469 | orchestrator | Thursday 16 January 2025 15:12:34 +0000 (0:00:02.864) 
0:02:24.558 ****** 2025-01-16 15:14:21.522538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:14:21.522549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.522556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.522581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.522588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': 
True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.522610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.522618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.522625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.522637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.522653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.522675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.522688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.522721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.522731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.522746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.522758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.522765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.522772 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:14:21.522794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:14:21.522802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.522808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 
'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.522825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.522833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.522839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.522860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.522868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 
'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.522875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.522882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.522899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.522907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.522914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.522933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.522941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.522959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.522966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.522973 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:14:21.522980 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:14:21.522999 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.523007 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.523014 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.523031 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.523039 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.523046 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.523065 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.523073 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.523086 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.523100 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.523107 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.523114 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.523121 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.523148 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.523156 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 
'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.523168 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.523175 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:14:21.523182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:14:21.523189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.523211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.523219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.523231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.523238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.523251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.523263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}}})  2025-01-16 15:14:21.523294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.523305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.523322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.523336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.523344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.523351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.523372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.523387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.523398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.523405 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:14:21.523412 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:14:21.523419 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.523426 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.523446 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.523458 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.523471 
| orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.523479 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.523503 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.523511 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.523532 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.523554 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.523561 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.523568 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.523575 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.523582 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.523610 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.523625 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.523632 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:14:21.523638 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:14:21.523645 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.523652 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.523659 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.523682 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.523696 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.523704 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.523710 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.523717 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 
'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.523724 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.523749 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.523764 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.523771 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.523778 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-01-16 15:14:21.523785 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-01-16 15:14:21.523792 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-01-16 15:14:21.523817 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-01-16 15:14:21.523826 | orchestrator | skipping: [testbed-node-5]
2025-01-16 15:14:21.523833 | orchestrator |
2025-01-16 15:14:21.523841 | orchestrator | TASK [neutron : Check neutron containers] **************************************
2025-01-16 15:14:21.523849 | orchestrator | Thursday 16 January 2025 15:12:37 +0000 (0:00:02.725) 0:02:27.283 ******
2025-01-16 15:14:21.523863 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test':
['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:14:21.523872 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.523880 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.523887 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.523916 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 
15:14:21.523932 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.523941 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.523948 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.523956 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.523964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-01-16 15:14:21.524003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.524012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-01-16 15:14:21.524020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.524028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.524042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.524054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.524066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.524074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.524081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.524089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.524097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.524115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.524129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.524137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.524145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.524153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.524161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.524175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.524183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.524194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.524202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.524217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.524224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.524236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.524244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.524262 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:14:21.524270 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.524278 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.524286 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.524298 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.524306 
| orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.524324 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.524332 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.524341 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.524349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-01-16 15:14:21.524360 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 
'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-01-16 15:14:21.524368 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.524391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.524399 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.524407 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.524415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.524427 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.524435 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.524461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.524470 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.524478 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.524542 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.524552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.524560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.524571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.524580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.524596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.524609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.524617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.524625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.524633 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-01-16 15:14:21.524651 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.524660 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.524674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:21.524682 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.524690 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-01-16 15:14:21.524707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:21.524716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.524728 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.524736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.524751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.524759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.524771 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.524779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.524790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.524798 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.524813 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.524821 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-01-16 15:14:21.524833 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.524841 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.524853 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': 
{'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.524861 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.524874 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.524883 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.524893 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.524900 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}}) 2025-01-16 15:14:21.524912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.524925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.524932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.524940 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-01-16 15:14:21.524950 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.524961 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:14:21.524969 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:14:21.524976 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.524990 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 
'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-01-16 15:14:21.524997 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-01-16 15:14:21.525008 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-01-16 15:14:21.525019 | orchestrator | 2025-01-16 15:14:21.525027 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-01-16 15:14:21.525034 | orchestrator | Thursday 16 January 2025 15:12:41 +0000 (0:00:03.508) 0:02:30.792 ****** 2025-01-16 15:14:21.525041 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:14:21.525048 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:14:21.525055 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:14:21.525062 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:14:21.525069 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:14:21.525079 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:14:21.525086 | orchestrator | 2025-01-16 15:14:21.525093 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-01-16 15:14:21.525101 | orchestrator | Thursday 16 January 2025 15:12:41 +0000 (0:00:00.570) 0:02:31.363 ****** 2025-01-16 15:14:21.525107 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:14:21.525115 | orchestrator | 2025-01-16 15:14:21.525122 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-01-16 15:14:21.525129 | orchestrator | Thursday 16 January 2025 15:12:43 +0000 (0:00:02.070) 0:02:33.433 ****** 2025-01-16 15:14:21.525136 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:14:21.525143 | orchestrator | 2025-01-16 15:14:21.525150 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-01-16 15:14:21.525157 | orchestrator | Thursday 16 January 2025 15:12:45 +0000 (0:00:01.832) 0:02:35.265 ****** 2025-01-16 15:14:21.525164 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:14:21.525171 | orchestrator | 2025-01-16 15:14:21.525178 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-01-16 
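Each service entry iterated over above carries a kolla-style 'healthcheck' block: an interval, a retry count, a start period, a CMD-SHELL test command and a timeout, all given in seconds as plain strings. As a rough illustration of how such a block maps onto a Docker healthcheck, here is a minimal sketch using the Docker Python SDK; the nanosecond conversion and the containers.run() call are assumptions for the example, only the image, container name and test command are taken from the output above.

```python
import docker

# healthcheck block as it appears in the service definitions above (values in seconds)
kolla_healthcheck = {
    "interval": "30",
    "retries": "3",
    "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_port ironic-neutron-agent 5672"],
    "timeout": "30",
}

def to_docker_healthcheck(hc):
    """Convert a kolla-style healthcheck dict into the shape docker-py expects:
    durations in nanoseconds, retries as an integer."""
    ns = 1_000_000_000
    return {
        "test": hc["test"],
        "interval": int(hc["interval"]) * ns,
        "timeout": int(hc["timeout"]) * ns,
        "start_period": int(hc["start_period"]) * ns,
        "retries": int(hc["retries"]),
    }

client = docker.from_env()
# hypothetical run call: image and name from the log, everything else illustrative
client.containers.run(
    "nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1",
    name="ironic_neutron_agent",
    detach=True,
    healthcheck=to_docker_healthcheck(kolla_healthcheck),
)
```

kolla-ansible starts the containers through its own modules rather than a direct SDK call like this; the sketch only shows how the numeric healthcheck fields are interpreted.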
15:14:21.525185 | orchestrator | Thursday 16 January 2025 15:13:14 +0000 (0:00:29.329) 0:03:04.595 ****** 2025-01-16 15:14:21.525192 | orchestrator | 2025-01-16 15:14:21.525199 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-01-16 15:14:21.525206 | orchestrator | Thursday 16 January 2025 15:13:15 +0000 (0:00:00.048) 0:03:04.643 ****** 2025-01-16 15:14:21.525213 | orchestrator | 2025-01-16 15:14:21.525220 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-01-16 15:14:21.525227 | orchestrator | Thursday 16 January 2025 15:13:15 +0000 (0:00:00.154) 0:03:04.797 ****** 2025-01-16 15:14:21.525235 | orchestrator | 2025-01-16 15:14:21.525242 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-01-16 15:14:21.525249 | orchestrator | Thursday 16 January 2025 15:13:15 +0000 (0:00:00.041) 0:03:04.839 ****** 2025-01-16 15:14:21.525256 | orchestrator | 2025-01-16 15:14:21.525263 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-01-16 15:14:21.525270 | orchestrator | Thursday 16 January 2025 15:13:15 +0000 (0:00:00.037) 0:03:04.877 ****** 2025-01-16 15:14:21.525277 | orchestrator | 2025-01-16 15:14:21.525284 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-01-16 15:14:21.525291 | orchestrator | Thursday 16 January 2025 15:13:15 +0000 (0:00:00.044) 0:03:04.921 ****** 2025-01-16 15:14:21.525298 | orchestrator | 2025-01-16 15:14:21.525305 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-01-16 15:14:21.525312 | orchestrator | Thursday 16 January 2025 15:13:15 +0000 (0:00:00.251) 0:03:05.173 ****** 2025-01-16 15:14:21.525319 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:14:21.525326 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:14:21.525333 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:14:21.525340 | orchestrator | 2025-01-16 15:14:21.525347 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-01-16 15:14:21.525354 | orchestrator | Thursday 16 January 2025 15:13:38 +0000 (0:00:23.414) 0:03:28.587 ****** 2025-01-16 15:14:21.525361 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:14:21.525368 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:14:21.525375 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:14:21.525382 | orchestrator | 2025-01-16 15:14:21.525399 | orchestrator | RUNNING HANDLER [neutron : Restart ironic-neutron-agent container] ************* 2025-01-16 15:14:21.525407 | orchestrator | Thursday 16 January 2025 15:14:11 +0000 (0:00:32.173) 0:04:00.761 ****** 2025-01-16 15:14:21.525414 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:14:21.525421 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:14:21.525428 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:14:21.525435 | orchestrator | 2025-01-16 15:14:21.525442 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 15:14:21.525449 | orchestrator | testbed-node-0 : ok=29  changed=18  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-01-16 15:14:21.525456 | orchestrator | testbed-node-1 : ok=19  changed=11  unreachable=0 failed=0 skipped=30  rescued=0 ignored=0 2025-01-16 15:14:21.525464 | orchestrator | testbed-node-2 : ok=19  changed=11  
unreachable=0 failed=0 skipped=30  rescued=0 ignored=0 2025-01-16 15:14:21.525471 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-01-16 15:14:21.525478 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-01-16 15:14:21.525507 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-01-16 15:14:24.527635 | orchestrator | 2025-01-16 15:14:24.527758 | orchestrator | 2025-01-16 15:14:24.527776 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 15:14:24.527789 | orchestrator | Thursday 16 January 2025 15:14:19 +0000 (0:00:08.768) 0:04:09.529 ****** 2025-01-16 15:14:24.527801 | orchestrator | =============================================================================== 2025-01-16 15:14:24.527812 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 32.17s 2025-01-16 15:14:24.527824 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 29.33s 2025-01-16 15:14:24.527836 | orchestrator | neutron : Restart neutron-server container ----------------------------- 23.41s 2025-01-16 15:14:24.527847 | orchestrator | neutron : Copying over neutron.conf ------------------------------------ 13.67s 2025-01-16 15:14:24.527858 | orchestrator | neutron : Restart ironic-neutron-agent container ------------------------ 8.77s 2025-01-16 15:14:24.527871 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 6.18s 2025-01-16 15:14:24.527890 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 5.67s 2025-01-16 15:14:24.527908 | orchestrator | neutron : Copying over nsx.ini ------------------------------------------ 5.09s 2025-01-16 15:14:24.527926 | orchestrator | neutron : Copying over config.json files for services ------------------- 5.06s 2025-01-16 15:14:24.527943 | orchestrator | neutron : Copying over ironic_neutron_agent.ini ------------------------- 4.73s 2025-01-16 15:14:24.528036 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 4.69s 2025-01-16 15:14:24.528061 | orchestrator | neutron : Copying over existing policy file ----------------------------- 4.59s 2025-01-16 15:14:24.528072 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 4.34s 2025-01-16 15:14:24.528084 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.05s 2025-01-16 15:14:24.528095 | orchestrator | Load and persist kernel modules ----------------------------------------- 3.79s 2025-01-16 15:14:24.528106 | orchestrator | neutron : Copy neutron-l3-agent-wrapper script -------------------------- 3.74s 2025-01-16 15:14:24.528118 | orchestrator | neutron : Copying over eswitchd.conf ------------------------------------ 3.54s 2025-01-16 15:14:24.528129 | orchestrator | neutron : Check neutron containers -------------------------------------- 3.51s 2025-01-16 15:14:24.528140 | orchestrator | Setting sysctl values --------------------------------------------------- 3.36s 2025-01-16 15:14:24.528180 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.35s 2025-01-16 15:14:24.528209 | orchestrator | 2025-01-16 15:14:24 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:14:24.528686 
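From this point the console is dominated by the OSISM manager polling its deployment tasks: each task ID is queried roughly once per second and reported as STARTED until it reaches SUCCESS, after which it drops out of the list and newly scheduled tasks appear. A minimal sketch of such a wait loop, assuming Celery-style AsyncResult objects (the task IDs are the ones printed below; the Celery usage itself is an assumption for illustration, not something stated in the job output):

```python
import time

from celery.result import AsyncResult  # assumes a configured Celery app and result backend

# task IDs as printed in the log below (illustrative selection)
TASK_IDS = [
    "ff3f485f-815c-45a4-9c83-586f68aa93ad",
    "43b9ebe2-6a06-4ba8-93d7-90f661036dfb",
    "422a6db5-b8a0-439e-857d-932aeef06a2c",
    "1bfcc09c-6dc9-47a0-8661-00828e0c5f7f",
]

def wait_for_tasks(task_ids, interval=1):
    """Report the state of every task and retry until none is still running."""
    pending = {tid: AsyncResult(tid) for tid in task_ids}
    while pending:
        for tid, result in list(pending.items()):
            print(f"Task {tid} is in state {result.state}")
            if result.state in ("SUCCESS", "FAILURE"):
                del pending[tid]  # finished tasks are no longer polled
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)

if __name__ == "__main__":
    wait_for_tasks(TASK_IDS)
```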
| orchestrator | 2025-01-16 15:14:24 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:14:24.528709 | orchestrator | 2025-01-16 15:14:24 | INFO  | Task 422a6db5-b8a0-439e-857d-932aeef06a2c is in state STARTED 2025-01-16 15:14:24.528727 | orchestrator | 2025-01-16 15:14:24 | INFO  | Task 1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state STARTED 2025-01-16 15:14:27.557316 | orchestrator | 2025-01-16 15:14:24 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:14:27.557586 | orchestrator | 2025-01-16 15:14:27 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:14:27.557796 | orchestrator | 2025-01-16 15:14:27 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:14:27.557886 | orchestrator | 2025-01-16 15:14:27 | INFO  | Task 422a6db5-b8a0-439e-857d-932aeef06a2c is in state STARTED 2025-01-16 15:14:27.557911 | orchestrator | 2025-01-16 15:14:27 | INFO  | Task 1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state STARTED 2025-01-16 15:14:30.583818 | orchestrator | 2025-01-16 15:14:27 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:14:30.583969 | orchestrator | 2025-01-16 15:14:30 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:14:30.584452 | orchestrator | 2025-01-16 15:14:30 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:14:30.584787 | orchestrator | 2025-01-16 15:14:30 | INFO  | Task 422a6db5-b8a0-439e-857d-932aeef06a2c is in state STARTED 2025-01-16 15:14:33.605550 | orchestrator | 2025-01-16 15:14:30 | INFO  | Task 1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state STARTED 2025-01-16 15:14:33.605737 | orchestrator | 2025-01-16 15:14:30 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:14:33.605762 | orchestrator | 2025-01-16 15:14:33 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:14:33.606151 | orchestrator | 2025-01-16 15:14:33 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:14:33.606164 | orchestrator | 2025-01-16 15:14:33 | INFO  | Task 422a6db5-b8a0-439e-857d-932aeef06a2c is in state STARTED 2025-01-16 15:14:33.607614 | orchestrator | 2025-01-16 15:14:33 | INFO  | Task 1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state STARTED 2025-01-16 15:14:33.608649 | orchestrator | 2025-01-16 15:14:33 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:14:36.632255 | orchestrator | 2025-01-16 15:14:36 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:14:36.632448 | orchestrator | 2025-01-16 15:14:36 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:14:36.632462 | orchestrator | 2025-01-16 15:14:36 | INFO  | Task 422a6db5-b8a0-439e-857d-932aeef06a2c is in state STARTED 2025-01-16 15:14:36.632477 | orchestrator | 2025-01-16 15:14:36 | INFO  | Task 1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state STARTED 2025-01-16 15:14:39.659031 | orchestrator | 2025-01-16 15:14:36 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:14:39.659131 | orchestrator | 2025-01-16 15:14:39 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:14:39.659330 | orchestrator | 2025-01-16 15:14:39 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:14:39.659379 | orchestrator | 2025-01-16 15:14:39 | INFO  | Task 422a6db5-b8a0-439e-857d-932aeef06a2c is in state STARTED 2025-01-16 15:14:39.659846 | 
orchestrator | 2025-01-16 15:14:39 | INFO  | Task 1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state STARTED 2025-01-16 15:14:42.683065 | orchestrator | 2025-01-16 15:14:39 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:14:42.683203 | orchestrator | 2025-01-16 15:14:42 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:14:42.683374 | orchestrator | 2025-01-16 15:14:42 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:14:42.683390 | orchestrator | 2025-01-16 15:14:42 | INFO  | Task 422a6db5-b8a0-439e-857d-932aeef06a2c is in state SUCCESS 2025-01-16 15:14:42.683402 | orchestrator | 2025-01-16 15:14:42 | INFO  | Task 1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state STARTED 2025-01-16 15:14:42.683765 | orchestrator | 2025-01-16 15:14:42 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:14:45.708554 | orchestrator | 2025-01-16 15:14:45 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:14:45.709231 | orchestrator | 2025-01-16 15:14:45 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:14:45.709598 | orchestrator | 2025-01-16 15:14:45 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:14:45.710085 | orchestrator | 2025-01-16 15:14:45 | INFO  | Task 1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state STARTED 2025-01-16 15:14:48.734002 | orchestrator | 2025-01-16 15:14:45 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:14:48.734163 | orchestrator | 2025-01-16 15:14:48 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:14:48.735808 | orchestrator | 2025-01-16 15:14:48 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:14:48.735992 | orchestrator | 2025-01-16 15:14:48 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:14:48.736015 | orchestrator | 2025-01-16 15:14:48 | INFO  | Task 1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state STARTED 2025-01-16 15:14:51.765826 | orchestrator | 2025-01-16 15:14:48 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:14:51.765973 | orchestrator | 2025-01-16 15:14:51 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:14:54.789999 | orchestrator | 2025-01-16 15:14:51 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:14:54.790153 | orchestrator | 2025-01-16 15:14:51 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:14:54.790167 | orchestrator | 2025-01-16 15:14:51 | INFO  | Task 1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state STARTED 2025-01-16 15:14:54.790177 | orchestrator | 2025-01-16 15:14:51 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:14:54.790318 | orchestrator | 2025-01-16 15:14:54 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:14:54.791726 | orchestrator | 2025-01-16 15:14:54 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:14:54.791773 | orchestrator | 2025-01-16 15:14:54 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:14:54.791791 | orchestrator | 2025-01-16 15:14:54 | INFO  | Task 1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state STARTED 2025-01-16 15:14:57.816901 | orchestrator | 2025-01-16 15:14:54 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:14:57.817024 | orchestrator | 2025-01-16 
15:14:57 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:14:57.818454 | orchestrator | 2025-01-16 15:14:57 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:14:57.818521 | orchestrator | 2025-01-16 15:14:57 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:14:57.818539 | orchestrator | 2025-01-16 15:14:57 | INFO  | Task 1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state STARTED 2025-01-16 15:15:00.841911 | orchestrator | 2025-01-16 15:14:57 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:15:00.842101 | orchestrator | 2025-01-16 15:15:00 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:15:00.842341 | orchestrator | 2025-01-16 15:15:00 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:15:00.842383 | orchestrator | 2025-01-16 15:15:00 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:15:00.843841 | orchestrator | 2025-01-16 15:15:00 | INFO  | Task 1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state STARTED 2025-01-16 15:15:03.868247 | orchestrator | 2025-01-16 15:15:00 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:15:03.868440 | orchestrator | 2025-01-16 15:15:03 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:15:03.874584 | orchestrator | 2025-01-16 15:15:03 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:15:03.875006 | orchestrator | 2025-01-16 15:15:03 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:15:03.875058 | orchestrator | 2025-01-16 15:15:03 | INFO  | Task 1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state STARTED 2025-01-16 15:15:06.910380 | orchestrator | 2025-01-16 15:15:03 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:15:06.910569 | orchestrator | 2025-01-16 15:15:06 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:15:06.911246 | orchestrator | 2025-01-16 15:15:06 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:15:06.911590 | orchestrator | 2025-01-16 15:15:06 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:15:06.911936 | orchestrator | 2025-01-16 15:15:06 | INFO  | Task 1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state STARTED 2025-01-16 15:15:09.936869 | orchestrator | 2025-01-16 15:15:06 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:15:09.936983 | orchestrator | 2025-01-16 15:15:09 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:15:09.937118 | orchestrator | 2025-01-16 15:15:09 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:15:09.940249 | orchestrator | 2025-01-16 15:15:09 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:15:12.957957 | orchestrator | 2025-01-16 15:15:09 | INFO  | Task 1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state STARTED 2025-01-16 15:15:12.958086 | orchestrator | 2025-01-16 15:15:09 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:15:12.958114 | orchestrator | 2025-01-16 15:15:12 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:15:12.958460 | orchestrator | 2025-01-16 15:15:12 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:15:12.958658 | orchestrator | 2025-01-16 
15:15:12 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:15:12.959246 | orchestrator | 2025-01-16 15:15:12 | INFO  | Task 1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state STARTED 2025-01-16 15:15:15.989349 | orchestrator | 2025-01-16 15:15:12 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:15:15.989571 | orchestrator | 2025-01-16 15:15:15 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:15:15.990797 | orchestrator | 2025-01-16 15:15:15 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:15:15.990839 | orchestrator | 2025-01-16 15:15:15 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:15:19.027047 | orchestrator | 2025-01-16 15:15:15 | INFO  | Task 1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state STARTED 2025-01-16 15:15:19.027177 | orchestrator | 2025-01-16 15:15:15 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:15:19.027215 | orchestrator | 2025-01-16 15:15:19 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:15:19.027711 | orchestrator | 2025-01-16 15:15:19 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:15:19.027754 | orchestrator | 2025-01-16 15:15:19 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:15:19.028188 | orchestrator | 2025-01-16 15:15:19 | INFO  | Task 1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state STARTED 2025-01-16 15:15:22.062946 | orchestrator | 2025-01-16 15:15:19 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:15:22.063059 | orchestrator | 2025-01-16 15:15:22 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:15:25.091311 | orchestrator | 2025-01-16 15:15:22 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:15:25.091541 | orchestrator | 2025-01-16 15:15:22 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:15:25.091563 | orchestrator | 2025-01-16 15:15:22 | INFO  | Task 1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state STARTED 2025-01-16 15:15:25.091575 | orchestrator | 2025-01-16 15:15:22 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:15:25.091602 | orchestrator | 2025-01-16 15:15:25 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:15:25.092262 | orchestrator | 2025-01-16 15:15:25 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:15:25.092296 | orchestrator | 2025-01-16 15:15:25 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:15:25.092812 | orchestrator | 2025-01-16 15:15:25 | INFO  | Task 1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state STARTED 2025-01-16 15:15:28.113728 | orchestrator | 2025-01-16 15:15:25 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:15:28.113882 | orchestrator | 2025-01-16 15:15:28 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:15:28.114797 | orchestrator | 2025-01-16 15:15:28 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:15:28.114879 | orchestrator | 2025-01-16 15:15:28 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:15:28.115238 | orchestrator | 2025-01-16 15:15:28 | INFO  | Task 1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state STARTED 2025-01-16 15:15:28.115405 | orchestrator | 2025-01-16 
15:15:28 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:15:31.140071 | orchestrator | 2025-01-16 15:15:31 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:15:31.140190 | orchestrator | 2025-01-16 15:15:31 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:15:31.140742 | orchestrator | 2025-01-16 15:15:31 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:15:31.141158 | orchestrator | 2025-01-16 15:15:31 | INFO  | Task 1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state STARTED 2025-01-16 15:15:34.184056 | orchestrator | 2025-01-16 15:15:31 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:15:34.184306 | orchestrator | 2025-01-16 15:15:34 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:15:34.184869 | orchestrator | 2025-01-16 15:15:34 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:15:34.184908 | orchestrator | 2025-01-16 15:15:34 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:15:34.185522 | orchestrator | 2025-01-16 15:15:34 | INFO  | Task 1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state STARTED 2025-01-16 15:15:34.186888 | orchestrator | 2025-01-16 15:15:34 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:15:37.208956 | orchestrator | 2025-01-16 15:15:37 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:15:37.209275 | orchestrator | 2025-01-16 15:15:37 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:15:37.209325 | orchestrator | 2025-01-16 15:15:37 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:15:37.209340 | orchestrator | 2025-01-16 15:15:37 | INFO  | Task 1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state STARTED 2025-01-16 15:15:40.233788 | orchestrator | 2025-01-16 15:15:37 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:15:40.233956 | orchestrator | 2025-01-16 15:15:40 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:15:43.259090 | orchestrator | 2025-01-16 15:15:40 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:15:43.259315 | orchestrator | 2025-01-16 15:15:40 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:15:43.259337 | orchestrator | 2025-01-16 15:15:40 | INFO  | Task 1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state STARTED 2025-01-16 15:15:43.259353 | orchestrator | 2025-01-16 15:15:40 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:15:43.259384 | orchestrator | 2025-01-16 15:15:43 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:15:43.259721 | orchestrator | 2025-01-16 15:15:43 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:15:43.259750 | orchestrator | 2025-01-16 15:15:43 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:15:43.259769 | orchestrator | 2025-01-16 15:15:43 | INFO  | Task 1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state STARTED 2025-01-16 15:15:46.286432 | orchestrator | 2025-01-16 15:15:43 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:15:46.286599 | orchestrator | 2025-01-16 15:15:46 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:15:46.286731 | orchestrator | 2025-01-16 15:15:46 | INFO  | Task 
ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:15:46.286740 | orchestrator | 2025-01-16 15:15:46 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:15:46.286751 | orchestrator | 2025-01-16 15:15:46 | INFO  | Task 1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state STARTED 2025-01-16 15:15:49.311336 | orchestrator | 2025-01-16 15:15:46 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:15:49.311550 | orchestrator | 2025-01-16 15:15:49 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:15:49.313023 | orchestrator | 2025-01-16 15:15:49 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:15:49.313085 | orchestrator | 2025-01-16 15:15:49 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:15:49.315470 | orchestrator | 2025-01-16 15:15:49 | INFO  | Task 1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state STARTED 2025-01-16 15:15:52.335870 | orchestrator | 2025-01-16 15:15:49 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:15:52.335984 | orchestrator | 2025-01-16 15:15:52 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:15:52.336109 | orchestrator | 2025-01-16 15:15:52 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:15:52.336124 | orchestrator | 2025-01-16 15:15:52 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:15:52.336136 | orchestrator | 2025-01-16 15:15:52 | INFO  | Task 1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state STARTED 2025-01-16 15:15:55.367812 | orchestrator | 2025-01-16 15:15:52 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:15:55.368043 | orchestrator | 2025-01-16 15:15:55 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:15:55.368353 | orchestrator | 2025-01-16 15:15:55 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:15:55.368384 | orchestrator | 2025-01-16 15:15:55 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:15:55.368406 | orchestrator | 2025-01-16 15:15:55 | INFO  | Task 1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state STARTED 2025-01-16 15:15:58.412676 | orchestrator | 2025-01-16 15:15:55 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:15:58.412844 | orchestrator | 2025-01-16 15:15:58 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:16:01.445958 | orchestrator | 2025-01-16 15:15:58 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:16:01.446119 | orchestrator | 2025-01-16 15:15:58 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:16:01.446138 | orchestrator | 2025-01-16 15:15:58 | INFO  | Task 1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state STARTED 2025-01-16 15:16:01.446148 | orchestrator | 2025-01-16 15:15:58 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:16:01.446173 | orchestrator | 2025-01-16 15:16:01 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:16:01.446267 | orchestrator | 2025-01-16 15:16:01 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:16:01.446284 | orchestrator | 2025-01-16 15:16:01 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:16:01.446852 | orchestrator | 2025-01-16 15:16:01 | INFO  | Task 
1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state STARTED 2025-01-16 15:16:04.472678 | orchestrator | 2025-01-16 15:16:01 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:16:04.472937 | orchestrator | 2025-01-16 15:16:04 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:16:04.473267 | orchestrator | 2025-01-16 15:16:04 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:16:04.473293 | orchestrator | 2025-01-16 15:16:04 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:16:04.473314 | orchestrator | 2025-01-16 15:16:04 | INFO  | Task 1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state STARTED 2025-01-16 15:16:07.501358 | orchestrator | 2025-01-16 15:16:04 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:16:07.501544 | orchestrator | 2025-01-16 15:16:07 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:16:10.528555 | orchestrator | 2025-01-16 15:16:07 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:16:10.528680 | orchestrator | 2025-01-16 15:16:07 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:16:10.528699 | orchestrator | 2025-01-16 15:16:07 | INFO  | Task 1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state STARTED 2025-01-16 15:16:10.528716 | orchestrator | 2025-01-16 15:16:07 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:16:10.528749 | orchestrator | 2025-01-16 15:16:10 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:16:10.529579 | orchestrator | 2025-01-16 15:16:10 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:16:10.529619 | orchestrator | 2025-01-16 15:16:10 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:16:10.529926 | orchestrator | 2025-01-16 15:16:10 | INFO  | Task 1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state STARTED 2025-01-16 15:16:13.555821 | orchestrator | 2025-01-16 15:16:10 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:16:13.555946 | orchestrator | 2025-01-16 15:16:13 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:16:13.556721 | orchestrator | 2025-01-16 15:16:13 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:16:13.558851 | orchestrator | 2025-01-16 15:16:13 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:16:13.558952 | orchestrator | 2025-01-16 15:16:13 | INFO  | Task 1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state STARTED 2025-01-16 15:16:16.581202 | orchestrator | 2025-01-16 15:16:13 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:16:16.581352 | orchestrator | 2025-01-16 15:16:16 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:16:16.582697 | orchestrator | 2025-01-16 15:16:16 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:16:16.582799 | orchestrator | 2025-01-16 15:16:16 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:16:16.584958 | orchestrator | 2025-01-16 15:16:16 | INFO  | Task 1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state STARTED 2025-01-16 15:16:19.601072 | orchestrator | 2025-01-16 15:16:16 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:16:19.601204 | orchestrator | 2025-01-16 15:16:19 | INFO  | Task 
ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:16:19.601578 | orchestrator | 2025-01-16 15:16:19 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:16:19.601610 | orchestrator | 2025-01-16 15:16:19 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:16:19.601920 | orchestrator | 2025-01-16 15:16:19 | INFO  | Task 1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state STARTED 2025-01-16 15:16:22.621963 | orchestrator | 2025-01-16 15:16:19 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:16:22.622179 | orchestrator | 2025-01-16 15:16:22 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:16:25.642288 | orchestrator | 2025-01-16 15:16:22 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:16:25.643058 | orchestrator | 2025-01-16 15:16:22 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:16:25.643127 | orchestrator | 2025-01-16 15:16:22 | INFO  | Task 1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state STARTED 2025-01-16 15:16:25.643140 | orchestrator | 2025-01-16 15:16:22 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:16:25.643167 | orchestrator | 2025-01-16 15:16:25 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:16:25.643551 | orchestrator | 2025-01-16 15:16:25 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:16:25.643595 | orchestrator | 2025-01-16 15:16:25 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:16:25.643614 | orchestrator | 2025-01-16 15:16:25 | INFO  | Task 1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state STARTED 2025-01-16 15:16:28.671363 | orchestrator | 2025-01-16 15:16:25 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:16:28.671464 | orchestrator | 2025-01-16 15:16:28 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:16:31.697674 | orchestrator | 2025-01-16 15:16:28 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:16:31.697795 | orchestrator | 2025-01-16 15:16:28 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:16:31.697814 | orchestrator | 2025-01-16 15:16:28 | INFO  | Task 1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state STARTED 2025-01-16 15:16:31.697830 | orchestrator | 2025-01-16 15:16:28 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:16:31.697863 | orchestrator | 2025-01-16 15:16:31 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:16:31.698253 | orchestrator | 2025-01-16 15:16:31 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:16:31.698290 | orchestrator | 2025-01-16 15:16:31 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:16:31.699180 | orchestrator | 2025-01-16 15:16:31 | INFO  | Task 1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state STARTED 2025-01-16 15:16:34.719846 | orchestrator | 2025-01-16 15:16:31 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:16:34.719972 | orchestrator | 2025-01-16 15:16:34 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state STARTED 2025-01-16 15:16:34.720252 | orchestrator | 2025-01-16 15:16:34 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:16:34.720330 | orchestrator | 2025-01-16 15:16:34 | INFO  | Task 
43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:16:34.720366 | orchestrator | 2025-01-16 15:16:34 | INFO  | Task 1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state STARTED 2025-01-16 15:16:37.750211 | orchestrator | 2025-01-16 15:16:34 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:16:37.750400 | orchestrator | 2025-01-16 15:16:37.750548 | orchestrator | 2025-01-16 15:16:37.750567 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-01-16 15:16:37.750577 | orchestrator | 2025-01-16 15:16:37.750586 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-01-16 15:16:37.750596 | orchestrator | Thursday 16 January 2025 15:14:18 +0000 (0:00:00.662) 0:00:00.662 ****** 2025-01-16 15:16:37.750606 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:16:37.750617 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:16:37.750627 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:16:37.750637 | orchestrator | ok: [testbed-manager] 2025-01-16 15:16:37.750646 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:16:37.750656 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:16:37.750665 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:16:37.750675 | orchestrator | 2025-01-16 15:16:37.750708 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-01-16 15:16:37.750718 | orchestrator | Thursday 16 January 2025 15:14:19 +0000 (0:00:01.370) 0:00:02.032 ****** 2025-01-16 15:16:37.750727 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-01-16 15:16:37.750737 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-01-16 15:16:37.750746 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-01-16 15:16:37.750759 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-01-16 15:16:37.750774 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-01-16 15:16:37.750787 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-01-16 15:16:37.750802 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-01-16 15:16:37.750812 | orchestrator | 2025-01-16 15:16:37.751230 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-01-16 15:16:37.751254 | orchestrator | 2025-01-16 15:16:37.751277 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-01-16 15:16:37.751287 | orchestrator | Thursday 16 January 2025 15:14:20 +0000 (0:00:00.604) 0:00:02.637 ****** 2025-01-16 15:16:37.751298 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 15:16:37.751309 | orchestrator | 2025-01-16 15:16:37.751318 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-01-16 15:16:37.751328 | orchestrator | Thursday 16 January 2025 15:14:21 +0000 (0:00:01.091) 0:00:03.728 ****** 2025-01-16 15:16:37.751337 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2025-01-16 15:16:37.751347 | orchestrator | 2025-01-16 15:16:37.751356 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-01-16 15:16:37.751366 | orchestrator | Thursday 16 January 2025 15:14:23 +0000 (0:00:02.201) 0:00:05.929 
****** 2025-01-16 15:16:37.751376 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-01-16 15:16:37.751462 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-01-16 15:16:37.751493 | orchestrator | 2025-01-16 15:16:37.751504 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-01-16 15:16:37.751513 | orchestrator | Thursday 16 January 2025 15:14:27 +0000 (0:00:04.315) 0:00:10.244 ****** 2025-01-16 15:16:37.751523 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-01-16 15:16:37.751763 | orchestrator | 2025-01-16 15:16:37.751778 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-01-16 15:16:37.751788 | orchestrator | Thursday 16 January 2025 15:14:30 +0000 (0:00:02.418) 0:00:12.662 ****** 2025-01-16 15:16:37.751798 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-01-16 15:16:37.751807 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2025-01-16 15:16:37.751817 | orchestrator | 2025-01-16 15:16:37.751826 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-01-16 15:16:37.751835 | orchestrator | Thursday 16 January 2025 15:14:33 +0000 (0:00:02.716) 0:00:15.379 ****** 2025-01-16 15:16:37.751845 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-01-16 15:16:37.751854 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2025-01-16 15:16:37.751864 | orchestrator | 2025-01-16 15:16:37.751873 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-01-16 15:16:37.751883 | orchestrator | Thursday 16 January 2025 15:14:37 +0000 (0:00:04.478) 0:00:19.858 ****** 2025-01-16 15:16:37.751892 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2025-01-16 15:16:37.751901 | orchestrator | 2025-01-16 15:16:37.751941 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 15:16:37.751952 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 15:16:37.751972 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 15:16:37.751983 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 15:16:37.752036 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 15:16:37.752046 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 15:16:37.752080 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 15:16:37.752092 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 15:16:37.752103 | orchestrator | 2025-01-16 15:16:37.752113 | orchestrator | 2025-01-16 15:16:37.752123 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 15:16:37.752133 | orchestrator | Thursday 16 January 2025 15:14:41 +0000 (0:00:04.124) 0:00:23.982 ****** 2025-01-16 15:16:37.752143 | orchestrator | 
=============================================================================== 2025-01-16 15:16:37.752153 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 4.48s 2025-01-16 15:16:37.752489 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 4.32s 2025-01-16 15:16:37.752503 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.12s 2025-01-16 15:16:37.752512 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 2.72s 2025-01-16 15:16:37.752604 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 2.42s 2025-01-16 15:16:37.752620 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 2.20s 2025-01-16 15:16:37.752631 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.37s 2025-01-16 15:16:37.752640 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.09s 2025-01-16 15:16:37.752649 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.60s 2025-01-16 15:16:37.752659 | orchestrator | 2025-01-16 15:16:37.752668 | orchestrator | 2025-01-16 15:16:37.752677 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-01-16 15:16:37.752686 | orchestrator | 2025-01-16 15:16:37.752696 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-01-16 15:16:37.752705 | orchestrator | Thursday 16 January 2025 15:12:57 +0000 (0:00:00.205) 0:00:00.205 ****** 2025-01-16 15:16:37.752714 | orchestrator | ok: [testbed-manager] 2025-01-16 15:16:37.752724 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:16:37.752733 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:16:37.752743 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:16:37.752752 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:16:37.752761 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:16:37.752771 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:16:37.752780 | orchestrator | 2025-01-16 15:16:37.752789 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-01-16 15:16:37.752799 | orchestrator | Thursday 16 January 2025 15:12:58 +0000 (0:00:00.562) 0:00:00.768 ****** 2025-01-16 15:16:37.752808 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-01-16 15:16:37.752826 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-01-16 15:16:37.752836 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-01-16 15:16:37.752846 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-01-16 15:16:37.752855 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-01-16 15:16:37.752874 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-01-16 15:16:37.752887 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-01-16 15:16:37.752897 | orchestrator | 2025-01-16 15:16:37.752906 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-01-16 15:16:37.752916 | orchestrator | 2025-01-16 15:16:37.752925 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-01-16 15:16:37.752935 | orchestrator | Thursday 16 January 2025 15:12:58 +0000 
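For reference, the Keystone resources created by the ceph-rgw play above (a swift object-store service, internal and public endpoints on port 6780, the ceph_rgw user in the service project and the ResellerAdmin role) could be reproduced with the OpenStack SDK roughly as sketched below; the connection setup, region name and password placeholder are assumptions, while the names and endpoint URLs are taken from the log:

```python
import openstack  # assumes openstacksdk with admin credentials in clouds.yaml or the environment

conn = openstack.connect()  # connection details are an assumption, e.g. cloud="testbed"

# object-store service and its endpoints, names and URLs as in the play above
service = conn.identity.create_service(name="swift", type="object-store")
endpoints = {
    "internal": "https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s",
    "public": "https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s",
}
for interface, url in endpoints.items():
    conn.identity.create_endpoint(
        service_id=service.id, interface=interface, url=url, region_id="RegionOne"
    )

# service user plus roles; the play granted admin to ceph_rgw on the service project
project = conn.identity.find_project("service")
user = conn.identity.create_user(name="ceph_rgw", password="...", default_project_id=project.id)
conn.identity.create_role(name="ResellerAdmin")  # created here; the grant below uses admin
admin_role = conn.identity.find_role("admin")
conn.identity.assign_project_role_to_user(project, user, admin_role)
```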
(0:00:00.570) 0:00:01.338 ****** 2025-01-16 15:16:37.752945 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 15:16:37.752955 | orchestrator | 2025-01-16 15:16:37.752964 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-01-16 15:16:37.753224 | orchestrator | Thursday 16 January 2025 15:12:59 +0000 (0:00:00.908) 0:00:02.247 ****** 2025-01-16 15:16:37.753245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-01-16 15:16:37.753316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-01-16 15:16:37.753337 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-01-16 15:16:37.753375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': 
{}}}) 2025-01-16 15:16:37.753387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-01-16 15:16:37.753405 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-01-16 15:16:37.753415 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.753425 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.753507 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-01-16 15:16:37.753527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-01-16 15:16:37.753550 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-01-16 15:16:37.753568 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-01-16 15:16:37.753578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:16:37.753953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:16:37.753969 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-01-16 15:16:37.754091 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.754114 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.754126 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-01-16 15:16:37.754159 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-01-16 15:16:37.754171 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': 
['timeout server 45s']}}}})  2025-01-16 15:16:37.754182 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.754251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-01-16 15:16:37.754271 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-01-16 15:16:37.754282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:16:37.754307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:16:37.754327 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.754338 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.754347 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-01-16 15:16:37.754357 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.754419 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-01-16 15:16:37.754434 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.754515 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:16:37.754530 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 
'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-01-16 15:16:37.754540 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-01-16 15:16:37.754550 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-01-16 15:16:37.754621 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.754635 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.754665 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.754676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:16:37.754686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-01-16 15:16:37.754700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-01-16 15:16:37.754734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-01-16 15:16:37 | INFO  | Task ff3f485f-815c-45a4-9c83-586f68aa93ad is in state SUCCESS 2025-01-16 15:16:37.754801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'],
'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-01-16 15:16:37.754851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-01-16 15:16:37.754863 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-01-16 15:16:37.754873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-01-16 15:16:37.754943 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 
'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-01-16 15:16:37.754971 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-01-16 15:16:37.754989 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.755000 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.755011 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-01-16 15:16:37.755023 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-01-16 15:16:37.755118 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-01-16 15:16:37.755144 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.755158 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.755168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:16:37.755178 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-01-16 15:16:37.755188 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.755197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:16:37.755207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.755293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-01-16 15:16:37.755322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.755332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:16:37.755342 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-01-16 15:16:37.755352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.755362 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.755372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-01-16 15:16:37.755392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.755465 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-01-16 15:16:37.755497 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.755507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-01-16 15:16:37.755517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-01-16 15:16:37.755527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-01-16 15:16:37.755543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:16:37.755637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.755661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-01-16 15:16:37.755676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.755686 | orchestrator | 2025-01-16 15:16:37.755696 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-01-16 15:16:37.755705 | orchestrator | Thursday 16 January 2025 15:13:02 +0000 (0:00:02.556) 0:00:04.803 ****** 2025-01-16 15:16:37.755715 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 15:16:37.755725 | orchestrator | 2025-01-16 15:16:37.755734 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-01-16 15:16:37.755744 | orchestrator | Thursday 16 January 2025 15:13:03 +0000 (0:00:01.098) 0:00:05.901 ****** 2025-01-16 15:16:37.755753 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-01-16 15:16:37.755764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-01-16 15:16:37.755774 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-01-16 15:16:37.755854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-01-16 15:16:37.755870 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-01-16 15:16:37.755881 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-01-16 15:16:37.755893 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-01-16 15:16:37.755921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:16:37.755932 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-01-16 15:16:37.755942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:16:37.755958 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-01-16 15:16:37.756033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:16:37.756053 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-01-16 15:16:37.756064 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-01-16 15:16:37.756073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:16:37.756083 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-01-16 15:16:37.756093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:16:37.756116 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-01-16 15:16:37.756179 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-01-16 15:16:37.756194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:16:37.756210 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-01-16 15:16:37.756220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-01-16 15:16:37.756230 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-01-16 15:16:37.756240 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:16:37.756301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-01-16 15:16:37.756312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-01-16 15:16:37.756377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:16:37.756391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:16:37.756408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:16:37.756418 | orchestrator | 2025-01-16 15:16:37.756428 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-01-16 15:16:37.756437 | orchestrator | Thursday 16 January 2025 15:13:07 +0000 (0:00:03.855) 0:00:09.757 ****** 2025-01-16 15:16:37.756447 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-01-16 15:16:37.756463 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-01-16 15:16:37.756501 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-01-16 15:16:37.756567 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-01-16 15:16:37.756582 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.756598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-01-16 15:16:37.756624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.756644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.756661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-01-16 15:16:37.756671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.756680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-01-16 15:16:37.756744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.756759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.756775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-01-16 15:16:37.756786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.756805 | orchestrator | skipping: [testbed-manager] 2025-01-16 15:16:37.756816 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:16:37.756825 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:16:37.756834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-01-16 15:16:37.756854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.756865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.756927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-01-16 15:16:37.756943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.756958 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:16:37.756968 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-01-16 15:16:37.756978 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-01-16 15:16:37.757008 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-01-16 15:16:37.757029 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:16:37.757040 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-01-16 15:16:37.757050 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-01-16 15:16:37.757059 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-01-16 15:16:37.757069 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:16:37.757132 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-01-16 15:16:37.757148 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-01-16 15:16:37.757164 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-01-16 15:16:37.757180 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:16:37.757190 | orchestrator | 2025-01-16 15:16:37.757200 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-01-16 15:16:37.757209 | orchestrator | Thursday 16 January 2025 15:13:08 +0000 (0:00:01.499) 0:00:11.256 ****** 2025-01-16 15:16:37.757229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-01-16 15:16:37.757240 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-01-16 15:16:37.757250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.757260 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.757321 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-01-16 15:16:37.757335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-01-16 15:16:37.757351 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-01-16 15:16:37.757391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.757402 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': 
True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-01-16 15:16:37.757412 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.757422 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:16:37.757530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-01-16 15:16:37.757550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.757572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.757589 | orchestrator | skipping: [testbed-manager] 2025-01-16 15:16:37.757599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-01-16 15:16:37.757609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.757619 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:16:37.757629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-01-16 15:16:37.757639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.757649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.757715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-01-16 15:16:37.757728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.757747 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:16:37.757777 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-01-16 15:16:37.757798 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-01-16 15:16:37.757809 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-01-16 15:16:37.757819 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:16:37.757828 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-01-16 15:16:37.757838 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-01-16 15:16:37.757848 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-01-16 15:16:37.757858 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:16:37.757923 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-01-16 15:16:37.757956 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-01-16 15:16:37.757974 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-01-16 15:16:37.757984 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:16:37.757993 | orchestrator | 2025-01-16 15:16:37.758003 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-01-16 15:16:37.758038 | orchestrator | Thursday 16 January 2025 15:13:10 +0000 (0:00:01.997) 0:00:13.254 ****** 2025-01-16 15:16:37.758050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-01-16 15:16:37.758060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-01-16 15:16:37.758071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-01-16 15:16:37.758137 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-01-16 15:16:37.758178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-01-16 15:16:37.758190 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-01-16 15:16:37.758200 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 
'active_passive': True}}}}) 2025-01-16 15:16:37.758210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-01-16 15:16:37.758274 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-01-16 15:16:37.758299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-01-16 15:16:37.758321 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-01-16 15:16:37.758332 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.758341 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2025-01-16 15:16:37.758351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:16:37.758361 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-01-16 15:16:37.758371 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.758440 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.758461 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-01-16 15:16:37.758494 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.758505 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.758515 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-01-16 15:16:37.758525 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.758553 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.758569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:16:37.758584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:16:37.758657 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-01-16 15:16:37.758674 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-01-16 15:16:37.758690 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-01-16 15:16:37.758701 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.758711 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.758731 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-01-16 
15:16:37.758802 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-01-16 15:16:37.758818 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-01-16 15:16:37.758834 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-01-16 15:16:37.758845 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.758854 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.758884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:16:37.758962 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-01-16 15:16:37.758981 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-01-16 15:16:37.758995 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-01-16 15:16:37.759006 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.759016 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.759042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:16:37.759108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:16:37.759124 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-01-16 15:16:37.759139 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.759150 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-01-16 15:16:37.759160 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 
'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.759180 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-01-16 15:16:37.759211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-01-16 15:16:37.759248 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-01-16 15:16:37.759260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 
'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-01-16 15:16:37.759270 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.759293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-01-16 15:16:37.759309 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-01-16 15:16:37.759319 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.759353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-01-16 15:16:37.759364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-01-16 15:16:37.759383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-01-16 15:16:37.759393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-01-16 15:16:37.759408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-01-16 15:16:37.759442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-01-16 15:16:37.759453 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:16:37.759520 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-01-16 15:16:37.759534 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.759544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:16:37.759560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.759570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-01-16 15:16:37.759609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.759620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:16:37.759639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.759649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-01-16 15:16:37.759659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.759674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:16:37.759684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.759694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-01-16 15:16:37.759735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.759748 | orchestrator | 2025-01-16 15:16:37.759758 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-01-16 15:16:37.759769 | orchestrator | Thursday 16 January 2025 15:13:14 +0000 (0:00:04.331) 0:00:17.586 ****** 2025-01-16 15:16:37.759780 | orchestrator | ok: [testbed-manager -> localhost] 2025-01-16 15:16:37.759791 | orchestrator | 2025-01-16 15:16:37.759801 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-01-16 15:16:37.759811 | orchestrator | Thursday 16 January 2025 15:13:15 +0000 (0:00:00.476) 0:00:18.062 ****** 2025-01-16 15:16:37.759822 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089039, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.2003126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.759833 | 
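The items echoed by the loop above are kolla-ansible style service definitions keyed by service name (container_name, enabled, image, volumes, and optional haproxy frontends). Whether a given host reports changed or skipping for an item appears to follow from the item's enabled flag together with the host's membership in the item's group, which is why prometheus-cadvisor changes on every host while prometheus-alertmanager only changes on testbed-manager. A minimal illustrative sketch of that filtering in Python; the services dict below is abridged from the log and the host/group mapping is hypothetical, not a kolla-ansible API:

# Abridged service definitions, copied from the loop items above (illustrative only).
services = {
    "prometheus-cadvisor": {
        "container_name": "prometheus_cadvisor",
        "enabled": True,
        "group": "prometheus-cadvisor",
        "image": "nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1",
    },
    "prometheus-openstack-exporter": {
        "container_name": "prometheus_openstack_exporter",
        "enabled": False,
        "group": "prometheus-openstack-exporter",
        "image": "nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1",
    },
}

# Hypothetical host-to-group mapping; the real mapping comes from the Ansible inventory.
host_groups = {"testbed-manager": {"prometheus-cadvisor", "prometheus-openstack-exporter"}}

def would_deploy(host, svc):
    # A service is acted on only if it is enabled and the host is in its group.
    return svc["enabled"] and svc["group"] in host_groups.get(host, set())

for name, svc in services.items():
    state = "deploy" if would_deploy("testbed-manager", svc) else "skip"
    print(f"{state:6} {svc['container_name']:35} {svc['image']}")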
orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089039, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.2003126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.759844 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089039, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.2003126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.759860 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089039, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.2003126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.759871 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089039, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.2003126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.759881 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1089027, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1983125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.759924 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089039, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.2003126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.759938 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1089027, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1983125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.759948 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1089027, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1983125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.759965 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1089027, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1983125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.759976 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1088995, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1963127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.759986 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1089027, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1983125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.759997 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1088995, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 
1736985777.0, 'ctime': 1737038164.1963127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760032 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1089027, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1983125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760048 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089039, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.2003126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-01-16 15:16:37.760064 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1088995, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1963127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760080 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1088999, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1963127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760090 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1088999, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1963127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760101 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1088995, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1963127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760112 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1088995, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1963127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760148 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1088995, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1963127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760168 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1088999, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1963127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760178 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1089018, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1983125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760196 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1088999, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1963127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760207 | orchestrator | skipping: [testbed-node-1] => (item={'path': 
'/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1089018, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1983125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760216 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1088999, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1963127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760226 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1088999, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1963127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760267 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1089018, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1983125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760279 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1089018, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1983125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760295 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1089007, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1973126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760304 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1089018, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1983125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760314 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1089007, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1973126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760324 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1089027, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1983125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-01-16 15:16:37.760341 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1089007, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1973126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760375 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1089018, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1983125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760387 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1089007, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1973126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760403 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1089007, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1973126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760413 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1089016, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1983125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760422 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1089016, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1983125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760432 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1089016, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1983125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760451 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1089016, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1983125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760499 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1089007, 'dev': 202, 'nlink': 1, 'atime': 
1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1973126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760512 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1089016, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1983125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760528 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1089031, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1993127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760539 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1089031, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1993127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760549 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1089031, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1993127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760571 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1089031, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1993127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760581 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1089016, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1983125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760615 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1089031, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1993127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760628 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1089037, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1993127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760643 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1089037, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1993127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760654 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1088995, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1963127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-01-16 15:16:37.760672 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1089037, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1993127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760683 | orchestrator | skipping: [testbed-node-0] => 
(item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1089037, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1993127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760693 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1089031, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1993127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760725 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1089037, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1993127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760742 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1089068, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.2043126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760753 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1089068, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.2043126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760771 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1089068, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.2043126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760781 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1089068, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.2043126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760792 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1089037, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1993127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760802 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1089068, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.2043126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760835 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1089034, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1993127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760851 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1089034, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1993127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760862 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1089034, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1993127, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760880 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1089034, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1993127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760891 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1089068, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.2043126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760901 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1089034, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1993127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760911 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089003, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1973126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760944 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089003, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1973126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760961 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 
'inode': 1089003, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1973126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.760982 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1088999, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1963127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-01-16 15:16:37.760993 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1089034, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1993127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.761003 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089003, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1973126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.761014 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1089013, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1973126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.761024 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1089013, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1973126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.761061 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089003, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1973126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.761074 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089003, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1973126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.761092 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1089013, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1973126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.761103 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1089013, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1973126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.761113 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1089013, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1973126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.761123 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088994, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1963127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False})  2025-01-16 15:16:37.761134 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088994, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1963127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.761171 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088994, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1963127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.761191 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1089013, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1973126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.761202 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1089023, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1983125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.761212 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088994, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1963127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.761223 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1089023, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1983125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.761233 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088994, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1963127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.761243 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1089023, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1983125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.761281 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088994, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1963127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.761300 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1089018, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1983125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-01-16 15:16:37.761311 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1089023, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1983125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.761322 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1089065, 'dev': 202, 'nlink': 1, 
'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.2033126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.761332 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1089065, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.2033126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.761343 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1089023, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1983125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.761353 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1089065, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.2033126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.761400 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1089065, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.2033126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.761413 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1089023, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1983125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.761423 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1089010, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1973126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.761434 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1089065, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.2033126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.761444 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1089010, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1973126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.761454 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1089010, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1973126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.761487 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1089010, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1973126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.761533 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1089043, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.2013125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.761545 | orchestrator | skipping: 
[testbed-node-3] 2025-01-16 15:16:37.761555 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1089065, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.2033126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.761565 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1089043, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.2013125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.761575 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1089010, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1973126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.761584 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:16:37.761594 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1089043, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.2013125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.761603 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:16:37.761613 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1089010, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1973126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.761636 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1089043, 
'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.2013125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.761646 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:16:37.761679 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1089043, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.2013125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.761690 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:16:37.761700 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1089007, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1973126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-01-16 15:16:37.761710 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1089043, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.2013125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-01-16 15:16:37.761719 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:16:37.761729 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1089016, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1983125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-01-16 15:16:37.761739 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1089031, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1993127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}) 2025-01-16 15:16:37.761754 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1089037, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1993127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-01-16 15:16:37.761771 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1089068, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.2043126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-01-16 15:16:37.761804 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1089034, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1993127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-01-16 15:16:37.761816 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1089003, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1973126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-01-16 15:16:37.761826 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1089013, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1973126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-01-16 15:16:37.761836 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088994, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1963127, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-01-16 15:16:37.761845 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1089023, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1983125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-01-16 15:16:37.761870 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1089065, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.2033126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-01-16 15:16:37.761880 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1089010, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1973126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-01-16 15:16:37.761912 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1089043, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.2013125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-01-16 15:16:37.761924 | orchestrator | 2025-01-16 15:16:37.761933 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-01-16 15:16:37.761943 | orchestrator | Thursday 16 January 2025 15:14:15 +0000 (0:00:59.779) 0:01:17.841 ****** 2025-01-16 15:16:37.761952 | orchestrator | ok: [testbed-manager -> localhost] 2025-01-16 15:16:37.761962 | orchestrator | 2025-01-16 15:16:37.761971 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-01-16 15:16:37.761980 | orchestrator | Thursday 16 January 2025 15:14:15 +0000 (0:00:00.719) 0:01:18.561 ****** 2025-01-16 15:16:37.761990 | orchestrator | [WARNING]: Skipped 2025-01-16 15:16:37.762000 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-01-16 15:16:37.762009 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 
2025-01-16 15:16:37.762049 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-01-16 15:16:37.762058 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-01-16 15:16:37.762068 | orchestrator | ok: [testbed-manager -> localhost] 2025-01-16 15:16:37.762078 | orchestrator | [WARNING]: Skipped 2025-01-16 15:16:37.762087 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-01-16 15:16:37.762096 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-01-16 15:16:37.762106 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-01-16 15:16:37.762115 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-01-16 15:16:37.762124 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-01-16 15:16:37.762140 | orchestrator | [WARNING]: Skipped 2025-01-16 15:16:37.762150 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-01-16 15:16:37.762164 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-01-16 15:16:37.762173 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-01-16 15:16:37.762183 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-01-16 15:16:37.762192 | orchestrator | [WARNING]: Skipped 2025-01-16 15:16:37.762201 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-01-16 15:16:37.762211 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-01-16 15:16:37.762220 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-01-16 15:16:37.762229 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-01-16 15:16:37.762242 | orchestrator | [WARNING]: Skipped 2025-01-16 15:16:37.762252 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-01-16 15:16:37.762261 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-01-16 15:16:37.762270 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-01-16 15:16:37.762280 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-01-16 15:16:37.762289 | orchestrator | [WARNING]: Skipped 2025-01-16 15:16:37.762299 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-01-16 15:16:37.762308 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-01-16 15:16:37.762317 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-01-16 15:16:37.762327 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-01-16 15:16:37.762336 | orchestrator | [WARNING]: Skipped 2025-01-16 15:16:37.762345 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-01-16 15:16:37.762355 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-01-16 15:16:37.762364 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-01-16 15:16:37.762373 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-01-16 15:16:37.762383 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-01-16 15:16:37.762392 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-01-16 15:16:37.762401 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-01-16 15:16:37.762411 | 
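The long item dictionaries in the rules-copy loop above are the standard per-file entries returned by Ansible's find module, and the copy reports "changed" only on testbed-manager, the host the rule files are actually deployed to in this run. The "[WARNING]: Skipped ... prometheus.yml.d ... is not a directory" lines that follow are emitted by the same find module whenever a per-host override directory simply does not exist in the configuration repository; they are harmless. A minimal sketch of that lookup, with paths and names inferred only from what the log prints, not taken from the role's source:

  # Illustrative sketch, not the actual role code. The path pattern is
  # inferred from the directories named in the warnings above.
  - name: Find prometheus host config overrides
    delegate_to: localhost
    ansible.builtin.find:
      paths: "/opt/configuration/environments/kolla/files/overlays/prometheus/{{ inventory_hostname }}/prometheus.yml.d"
      patterns: "*.yml"
    register: prometheus_host_overrides
    # find does not fail on a missing directory; it prints the
    # "Skipped '<path>' ... is not a directory" warning seen above and
    # returns an empty file list instead.

Creating those prometheus.yml.d directories (common or per host) and dropping YAML fragments into them is the usual way extra scrape configuration gets merged into the generated prometheus.yml; with none present, the warnings are expected output.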
orchestrator | ok: [testbed-node-4 -> localhost] 2025-01-16 15:16:37.762420 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-01-16 15:16:37.762429 | orchestrator | 2025-01-16 15:16:37.762439 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-01-16 15:16:37.762448 | orchestrator | Thursday 16 January 2025 15:14:17 +0000 (0:00:01.476) 0:01:20.037 ****** 2025-01-16 15:16:37.762458 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-01-16 15:16:37.762467 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:16:37.762520 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-01-16 15:16:37.762529 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:16:37.762539 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-01-16 15:16:37.762548 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:16:37.762558 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-01-16 15:16:37.762567 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:16:37.762577 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-01-16 15:16:37.762617 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:16:37.762628 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-01-16 15:16:37.762638 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:16:37.762647 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-01-16 15:16:37.762663 | orchestrator | 2025-01-16 15:16:37.762672 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-01-16 15:16:37.762686 | orchestrator | Thursday 16 January 2025 15:14:28 +0000 (0:00:11.566) 0:01:31.604 ****** 2025-01-16 15:16:37.762695 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-01-16 15:16:37.762703 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:16:37.762712 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-01-16 15:16:37.762721 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:16:37.762729 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-01-16 15:16:37.762738 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:16:37.762746 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-01-16 15:16:37.762755 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:16:37.762764 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-01-16 15:16:37.762772 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:16:37.762781 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-01-16 15:16:37.762789 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:16:37.762798 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-01-16 15:16:37.762807 | orchestrator | 2025-01-16 15:16:37.762815 | 
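In the two template tasks just above ("Copying over prometheus config file" and "Copying over prometheus web config file"), every testbed-node-* host reports "skipping" and only testbed-manager reports "changed", which is consistent with the manager being the only member of the Prometheus server group in this testbed inventory. A minimal sketch of that pattern, assuming a destination path and group name for illustration (only the src path is the one printed in the log):

  # Sketch of the "render only on prometheus-server hosts" pattern;
  # dest and the group name are assumptions, not the role's real values.
  - name: Copying over prometheus config file
    ansible.builtin.template:
      src: /ansible/roles/prometheus/templates/prometheus.yml.j2
      dest: /etc/kolla/prometheus-server/prometheus.yml
      mode: "0660"
    when: inventory_hostname in groups['prometheus-server']

The same skip/changed split repeats for prometheus-web.yml.j2 and, further down, for the alertmanager and blackbox exporter configuration files.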
orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-01-16 15:16:37.762824 | orchestrator | Thursday 16 January 2025 15:14:32 +0000 (0:00:03.616) 0:01:35.221 ****** 2025-01-16 15:16:37.762833 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-01-16 15:16:37.762841 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:16:37.762850 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-01-16 15:16:37.762859 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:16:37.762868 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-01-16 15:16:37.762877 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:16:37.762886 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-01-16 15:16:37.762894 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:16:37.762903 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-01-16 15:16:37.762911 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:16:37.762920 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-01-16 15:16:37.762928 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:16:37.762937 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-01-16 15:16:37.762946 | orchestrator | 2025-01-16 15:16:37.762954 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-01-16 15:16:37.762963 | orchestrator | Thursday 16 January 2025 15:14:35 +0000 (0:00:02.459) 0:01:37.680 ****** 2025-01-16 15:16:37.762971 | orchestrator | ok: [testbed-manager -> localhost] 2025-01-16 15:16:37.762980 | orchestrator | 2025-01-16 15:16:37.762989 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-01-16 15:16:37.762997 | orchestrator | Thursday 16 January 2025 15:14:35 +0000 (0:00:00.338) 0:01:38.019 ****** 2025-01-16 15:16:37.763006 | orchestrator | skipping: [testbed-manager] 2025-01-16 15:16:37.763022 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:16:37.763031 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:16:37.763040 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:16:37.763048 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:16:37.763057 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:16:37.763065 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:16:37.763074 | orchestrator | 2025-01-16 15:16:37.763082 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-01-16 15:16:37.763091 | orchestrator | Thursday 16 January 2025 15:14:35 +0000 (0:00:00.526) 0:01:38.545 ****** 2025-01-16 15:16:37.763100 | orchestrator | skipping: [testbed-manager] 2025-01-16 15:16:37.763108 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:16:37.763117 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:16:37.763125 | orchestrator | 
skipping: [testbed-node-5] 2025-01-16 15:16:37.763134 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:16:37.763143 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:16:37.763151 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:16:37.763160 | orchestrator | 2025-01-16 15:16:37.763168 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-01-16 15:16:37.763177 | orchestrator | Thursday 16 January 2025 15:14:38 +0000 (0:00:02.521) 0:01:41.066 ****** 2025-01-16 15:16:37.763189 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-01-16 15:16:37.763198 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:16:37.763206 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-01-16 15:16:37.763215 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:16:37.763224 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-01-16 15:16:37.763232 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:16:37.763241 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-01-16 15:16:37.763249 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:16:37.763258 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-01-16 15:16:37.763267 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:16:37.763275 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-01-16 15:16:37.763284 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:16:37.763292 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-01-16 15:16:37.763301 | orchestrator | skipping: [testbed-manager] 2025-01-16 15:16:37.763310 | orchestrator | 2025-01-16 15:16:37.763318 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-01-16 15:16:37.763327 | orchestrator | Thursday 16 January 2025 15:14:41 +0000 (0:00:02.900) 0:01:43.967 ****** 2025-01-16 15:16:37.763335 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-01-16 15:16:37.763344 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:16:37.763353 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-01-16 15:16:37.763361 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:16:37.763370 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-01-16 15:16:37.763378 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:16:37.763387 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-01-16 15:16:37.763395 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:16:37.763404 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-01-16 15:16:37.763413 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:16:37.763426 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-01-16 15:16:37.763434 | orchestrator | 
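The exporter-specific files above follow the host mapping rather than the server group: my.cnf for mysqld_exporter comes back "changed" only on testbed-node-0 through testbed-node-2, consistent with those three nodes carrying the MariaDB control plane in this testbed, while the clouds.yml for the openstack exporter is skipped on every host, which usually just means that particular exporter is not enabled for this run. Which of these copy tasks fire is governed by the familiar kolla-ansible enable switches; the values below are a guess that would reproduce the skip pattern in the log, not settings read from this job:

  # Hypothetical kolla globals excerpt -- illustrative values only,
  # chosen to match the skip/changed pattern logged above.
  enable_prometheus: "yes"
  enable_prometheus_alertmanager: "yes"
  enable_prometheus_mysqld_exporter: "yes"
  enable_prometheus_blackbox_exporter: "yes"
  enable_prometheus_openstack_exporter: "no"

Enabling the openstack exporter should turn the skipped clouds.yml task into a changed one on the Prometheus host on a subsequent run.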
skipping: [testbed-node-5] 2025-01-16 15:16:37.763447 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-01-16 15:16:37.763456 | orchestrator | 2025-01-16 15:16:37.763464 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-01-16 15:16:37.763484 | orchestrator | Thursday 16 January 2025 15:14:44 +0000 (0:00:03.230) 0:01:47.197 ****** 2025-01-16 15:16:37.763493 | orchestrator | [WARNING]: Skipped 2025-01-16 15:16:37.763502 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-01-16 15:16:37.763510 | orchestrator | due to this access issue: 2025-01-16 15:16:37.763519 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-01-16 15:16:37.763527 | orchestrator | not a directory 2025-01-16 15:16:37.763536 | orchestrator | ok: [testbed-manager -> localhost] 2025-01-16 15:16:37.763545 | orchestrator | 2025-01-16 15:16:37.763553 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-01-16 15:16:37.763562 | orchestrator | Thursday 16 January 2025 15:14:46 +0000 (0:00:02.159) 0:01:49.357 ****** 2025-01-16 15:16:37.763570 | orchestrator | skipping: [testbed-manager] 2025-01-16 15:16:37.763579 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:16:37.763587 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:16:37.763596 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:16:37.763604 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:16:37.763613 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:16:37.763621 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:16:37.763630 | orchestrator | 2025-01-16 15:16:37.763639 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-01-16 15:16:37.763647 | orchestrator | Thursday 16 January 2025 15:14:48 +0000 (0:00:01.400) 0:01:50.757 ****** 2025-01-16 15:16:37.763656 | orchestrator | skipping: [testbed-manager] 2025-01-16 15:16:37.763664 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:16:37.763673 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:16:37.763681 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:16:37.763690 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:16:37.763698 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:16:37.763706 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:16:37.763725 | orchestrator | 2025-01-16 15:16:37.763734 | orchestrator | TASK [prometheus : Copying over prometheus msteams config file] **************** 2025-01-16 15:16:37.763743 | orchestrator | Thursday 16 January 2025 15:14:49 +0000 (0:00:01.096) 0:01:51.854 ****** 2025-01-16 15:16:37.763751 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-01-16 15:16:37.763760 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:16:37.763769 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-01-16 15:16:37.763777 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:16:37.763786 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-01-16 15:16:37.763795 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:16:37.763813 | orchestrator | skipping: [testbed-node-3] => 
(item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-01-16 15:16:37.763823 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:16:37.763831 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-01-16 15:16:37.763840 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:16:37.763849 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-01-16 15:16:37.763857 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:16:37.763866 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-01-16 15:16:37.763880 | orchestrator | skipping: [testbed-manager] 2025-01-16 15:16:37.763889 | orchestrator | 2025-01-16 15:16:37.763897 | orchestrator | TASK [prometheus : Copying over prometheus msteams template file] ************** 2025-01-16 15:16:37.763906 | orchestrator | Thursday 16 January 2025 15:14:52 +0000 (0:00:03.710) 0:01:55.565 ****** 2025-01-16 15:16:37.763915 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-01-16 15:16:37.763923 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:16:37.763932 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-01-16 15:16:37.763940 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:16:37.763949 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-01-16 15:16:37.763957 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:16:37.763966 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-01-16 15:16:37.763974 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:16:37.763983 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-01-16 15:16:37.763991 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:16:37.764000 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-01-16 15:16:37.764009 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:16:37.764017 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-01-16 15:16:37.764026 | orchestrator | skipping: [testbed-manager] 2025-01-16 15:16:37.764034 | orchestrator | 2025-01-16 15:16:37.764043 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-01-16 15:16:37.764052 | orchestrator | Thursday 16 January 2025 15:14:55 +0000 (0:00:02.908) 0:01:58.474 ****** 2025-01-16 15:16:37.764061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-01-16 15:16:37.764070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-01-16 15:16:37.764080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-01-16 15:16:37.764110 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-01-16 15:16:37.764120 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-01-16 15:16:37.764129 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-01-16 15:16:37.764138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-01-16 15:16:37.764148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-01-16 15:16:37.764168 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-01-16 15:16:37.764185 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-01-16 15:16:37.764194 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.764204 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.764214 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-01-16 15:16:37.764223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:16:37.764232 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.764248 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.764266 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-01-16 15:16:37.764275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:16:37.764284 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.764293 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.764302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-01-16 15:16:37.764310 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-01-16 15:16:37.764319 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.764339 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.764352 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-01-16 15:16:37.764362 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-01-16 15:16:37.764371 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-01-16 15:16:37.764380 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.764396 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.764410 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-01-16 15:16:37.764424 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-01-16 15:16:37.764437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:16:37.764450 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-01-16 15:16:37.764459 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.764483 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.764497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:16:37.764506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:16:37.764520 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-01-16 15:16:37.764529 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-01-16 15:16:37.764546 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-01-16 15:16:37.764555 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-01-16 15:16:37.764569 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.764578 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.764591 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-01-16 15:16:37.764601 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': 
['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.764616 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-01-16 15:16:37.764626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-01-16 15:16:37.764635 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.764644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-01-16 15:16:37.764661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-01-16 15:16:37.764671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-01-16 15:16:37.764687 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-01-16 15:16:37.764881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-01-16 15:16:37.764903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-01-16 15:16:37.764918 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-01-16 15:16:37.764927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:16:37.764936 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.764945 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-01-16 15:16:37.764954 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.764967 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:16:37.764976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.764985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-01-16 15:16:37.764998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.765007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-01-16 15:16:37.765016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:16:37.765026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-01-16 15:16:37.765039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.765049 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:16:37.765062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-01-16 15:16:37.765071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-01-16 15:16:37.765080 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-01-16 15:16:37.765089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.765105 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.765114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-01-16 15:16:37.765123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-01-16 15:16:37.765132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-01-16 15:16:37.765145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-01-16 15:16:37.765154 | orchestrator |
2025-01-16 15:16:37.765163 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2025-01-16 15:16:37.765172 | orchestrator | Thursday 16 January 2025 15:15:00 +0000 (0:00:04.331) 0:02:02.805 ******
2025-01-16 15:16:37.765181 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-01-16 15:16:37.765190 | orchestrator |
2025-01-16 15:16:37.765199 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-01-16 15:16:37.765208 | orchestrator | Thursday 16 January 2025 15:15:03 +0000 (0:00:03.103) 0:02:05.909 ******
2025-01-16 15:16:37.765216 | orchestrator |
2025-01-16 15:16:37.765225 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-01-16 15:16:37.765234 | orchestrator | Thursday 16 January 2025 15:15:03 +0000 (0:00:00.267) 0:02:06.176 ******
2025-01-16 15:16:37.765242 | orchestrator |
2025-01-16 15:16:37.765251 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-01-16 15:16:37.765259 | orchestrator | Thursday 16 January 2025 15:15:03 +0000 (0:00:00.050) 0:02:06.227 ******
2025-01-16 15:16:37.765268 | orchestrator |
2025-01-16 15:16:37.765276 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-01-16 15:16:37.765289 | orchestrator | Thursday 16 January 2025 15:15:03 +0000 (0:00:00.051) 0:02:06.278 ******
2025-01-16 15:16:37.765298 | orchestrator |
2025-01-16 15:16:37.765306 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-01-16 15:16:37.765315 | orchestrator | Thursday 16 January 2025 15:15:03 +0000 (0:00:00.123) 0:02:06.402 ******
2025-01-16 15:16:37.765323 | orchestrator |
2025-01-16 15:16:37.765332 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-01-16 15:16:37.765340 | orchestrator | Thursday 16 January 2025 15:15:04 +0000 (0:00:00.329) 0:02:06.732 ******
2025-01-16 15:16:37.765349 | orchestrator |
2025-01-16 15:16:37.765357 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-01-16 15:16:37.765365 | orchestrator | Thursday 16 January 2025 15:15:04 +0000 (0:00:00.077) 0:02:06.810 ******
2025-01-16 15:16:37.765374 | orchestrator |
2025-01-16 15:16:37.765382 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2025-01-16 15:16:37.765391 | orchestrator | Thursday 16 January 2025 15:15:04 +0000 (0:00:00.098) 0:02:06.909 ******
2025-01-16 15:16:37.765399 | orchestrator | changed: [testbed-manager]
2025-01-16 15:16:37.765408 | orchestrator |
2025-01-16 15:16:37.765416 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2025-01-16 15:16:37.765425 | orchestrator | Thursday 16 January 2025 15:15:18 +0000 (0:00:14.135) 0:02:21.044 ******
2025-01-16 15:16:37.765433 | orchestrator | changed: [testbed-node-5]
2025-01-16 15:16:37.765442 | orchestrator | changed: [testbed-node-3]
2025-01-16 15:16:37.765450 | orchestrator | changed: [testbed-node-4]
2025-01-16 15:16:37.765459 | orchestrator | changed: [testbed-manager]
2025-01-16 15:16:37.765480 | orchestrator | changed: [testbed-node-0]
2025-01-16 15:16:37.765489 | orchestrator | changed: [testbed-node-1]
2025-01-16 15:16:37.765498 | orchestrator | changed: [testbed-node-2]
2025-01-16 15:16:37.765507 | orchestrator |
2025-01-16 15:16:37.765515 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2025-01-16 15:16:37.765524 | orchestrator | Thursday 16 January 2025 15:15:36 +0000 (0:00:18.159) 0:02:39.203 ******
2025-01-16 15:16:37.765533 | orchestrator | changed: [testbed-node-1]
2025-01-16 15:16:37.765541 | orchestrator | changed: [testbed-node-2]
2025-01-16 15:16:37.765550 | orchestrator | changed: [testbed-node-0]
2025-01-16 15:16:37.765558 | orchestrator |
2025-01-16 15:16:37.765567 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2025-01-16 15:16:37.765576 | orchestrator | Thursday 16 January 2025 15:15:42 +0000 (0:00:06.139) 0:02:45.343 ******
2025-01-16 15:16:37.765585 | orchestrator | changed: [testbed-node-1]
2025-01-16 15:16:37.765593 | orchestrator | changed: [testbed-node-0]
2025-01-16 15:16:37.765601 | orchestrator | changed: [testbed-node-2]
2025-01-16 15:16:37.765610 | orchestrator |
2025-01-16 15:16:37.765619 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2025-01-16 15:16:37.765627 | orchestrator | Thursday 16 January 2025 15:15:48 +0000 (0:00:05.807) 0:02:51.150 ******
2025-01-16 15:16:37.765636 | orchestrator | changed: [testbed-node-1]
2025-01-16 15:16:37.765644 | orchestrator | changed: [testbed-node-0]
2025-01-16 15:16:37.765653 | orchestrator | changed: [testbed-manager]
2025-01-16 15:16:37.765662 | orchestrator | changed: [testbed-node-4]
2025-01-16 15:16:37.765670 | orchestrator | changed: [testbed-node-2]
2025-01-16 15:16:37.765679 | orchestrator | changed: [testbed-node-5]
2025-01-16 15:16:37.765687 | orchestrator | changed: [testbed-node-3]
2025-01-16 15:16:37.765696 | orchestrator |
2025-01-16 15:16:37.765705 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2025-01-16 15:16:37.765714 | orchestrator | Thursday 16 January 2025 15:16:04 +0000 (0:00:16.380) 0:03:07.531 ******
2025-01-16 15:16:37.765722 | orchestrator | changed: [testbed-manager]
2025-01-16 15:16:37.765732 | orchestrator |
2025-01-16 15:16:37.765745 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2025-01-16 15:16:37.765754 | orchestrator | Thursday 16 January 2025 15:16:13 +0000 (0:00:08.709) 0:03:16.241 ******
2025-01-16 15:16:37.765763 | orchestrator | changed: [testbed-node-2]
2025-01-16 15:16:37.765777 | orchestrator | changed: [testbed-node-1]
2025-01-16 15:16:37.765786 | orchestrator | changed: [testbed-node-0]
2025-01-16 15:16:37.765794 | orchestrator |
2025-01-16 15:16:37.765803 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2025-01-16 15:16:37.765812 | orchestrator | Thursday 16 January 2025 15:16:24 +0000 (0:00:11.372) 0:03:27.613 ******
2025-01-16 15:16:37.765825 | orchestrator | changed: [testbed-manager]
2025-01-16 15:16:40.768164 | orchestrator |
2025-01-16 15:16:40.768363 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2025-01-16 15:16:40.768382 | orchestrator | Thursday 16 January 2025 15:16:30 +0000 (0:00:05.266) 0:03:32.880 ******
2025-01-16 15:16:40.768393 | orchestrator | changed: [testbed-node-3]
2025-01-16 15:16:40.768405 | orchestrator | changed: [testbed-node-4]
2025-01-16 15:16:40.768411 | orchestrator | changed: [testbed-node-5]
2025-01-16 15:16:40.768417 | orchestrator |
2025-01-16 15:16:40.768423 | orchestrator | PLAY RECAP *********************************************************************
2025-01-16 15:16:40.768431 | orchestrator | testbed-manager : ok=24  changed=15  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-01-16 15:16:40.768439 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-01-16 15:16:40.768445 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-01-16 15:16:40.768451 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-01-16 15:16:40.768458 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-01-16 15:16:40.768464 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-01-16 15:16:40.768507 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-01-16 15:16:40.768514 | orchestrator |
2025-01-16 15:16:40.768520 | orchestrator |
2025-01-16 15:16:40.768526 | orchestrator | TASKS RECAP ********************************************************************
2025-01-16 15:16:40.768532 | orchestrator | Thursday 16 January 2025 15:16:35 +0000 (0:00:04.993) 0:03:37.874 ******
2025-01-16 15:16:40.768538 | orchestrator | ===============================================================================
2025-01-16 15:16:40.768544 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 59.78s
2025-01-16 15:16:40.768550 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 18.16s
2025-01-16 15:16:40.768556 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 16.38s
2025-01-16 15:16:40.768562 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 14.14s
2025-01-16 15:16:40.768568 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 11.57s
2025-01-16 15:16:40.768574 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 11.37s
2025-01-16 15:16:40.768579 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.71s
2025-01-16 15:16:40.768585 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 6.14s
2025-01-16 15:16:40.768591 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 5.81s
2025-01-16 15:16:40.768597 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.27s
2025-01-16 15:16:40.768603 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 4.99s
2025-01-16 15:16:40.768609 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.33s
2025-01-16 15:16:40.768637 | orchestrator | prometheus : Copying over config.json files ----------------------------- 4.33s
2025-01-16 15:16:40.768675 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 3.86s
2025-01-16 15:16:40.768683 | orchestrator | prometheus : Copying over prometheus msteams config file ---------------- 3.71s
2025-01-16 15:16:40.768692 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.62s
2025-01-16 15:16:40.768702 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 3.23s
2025-01-16 15:16:40.768711 | orchestrator | prometheus : Creating prometheus database user and setting permissions --- 3.10s
2025-01-16 15:16:40.768824 | orchestrator | prometheus : Copying over prometheus msteams template file -------------- 2.91s
2025-01-16 15:16:40.768837 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 2.90s
2025-01-16 15:16:40.768849 | orchestrator | 2025-01-16 15:16:37 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED
2025-01-16 15:16:40.768858 | orchestrator | 2025-01-16 15:16:37 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED
2025-01-16 15:16:40.768867 | orchestrator | 2025-01-16 15:16:37 | INFO  | Task 1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state STARTED
2025-01-16 15:16:40.768876 | orchestrator | 2025-01-16 15:16:37 | INFO  | Task 15084ea0-5098-485e-b9fb-e1d9b889fff9 is in state STARTED
2025-01-16 15:16:40.768886 | orchestrator | 2025-01-16 15:16:37 | INFO  | Wait 1 second(s) until the next check
2025-01-16 15:16:40.768910 | orchestrator | 2025-01-16 15:16:40 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED
2025-01-16 15:16:40.769706 | orchestrator | 2025-01-16 15:16:40 | INFO  | Task 81658657-b236-472c-8b36-9b836dbb2224 is in state STARTED
2025-01-16 15:16:40.769747 | orchestrator | 2025-01-16 15:16:40 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED
2025-01-16 15:16:40.769766 | orchestrator | 2025-01-16 15:16:40 | INFO  | Task 1bfcc09c-6dc9-47a0-8661-00828e0c5f7f is in state SUCCESS
2025-01-16 15:16:40.771000 | orchestrator |
2025-01-16 15:16:40.771030 | orchestrator |
2025-01-16 15:16:40.771040 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-01-16 15:16:40.771064 | orchestrator |
2025-01-16 15:16:40.771072 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-01-16 15:16:40.771087 | orchestrator | Thursday 16 January 2025 15:14:22 +0000 (0:00:00.200) 0:00:00.200 ******
2025-01-16 15:16:40.771096 | orchestrator | ok: [testbed-node-0]
2025-01-16 15:16:40.771107 | orchestrator | ok: [testbed-node-1]
2025-01-16 15:16:40.771116 | orchestrator | ok: [testbed-node-2]
2025-01-16 15:16:40.771126 | orchestrator | ok: [testbed-node-3]
2025-01-16 15:16:40.771136 | orchestrator | ok: [testbed-node-4]
2025-01-16 15:16:40.771146 | orchestrator | ok: [testbed-node-5]
2025-01-16 15:16:40.771156 | orchestrator |
2025-01-16 15:16:40.771166 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-01-16 15:16:40.771175 | orchestrator | Thursday 16 January 2025 15:14:22 +0000 (0:00:00.419) 0:00:00.619 ******
2025-01-16 15:16:40.771186 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2025-01-16 15:16:40.771196 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2025-01-16 15:16:40.771206 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2025-01-16 15:16:40.771216 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True)
2025-01-16 15:16:40.771262 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True)
2025-01-16 15:16:40.771273 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True)
2025-01-16 15:16:40.771283 | orchestrator |
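Note: the two grouping tasks above are plain Ansible group_by calls. A minimal sketch of that pattern follows, assuming the variable names kolla_action and enable_cinder suggested by the item names in the log; it is illustrative, not the verbatim kolla-ansible source.

    # Illustrative sketch only: groups hosts so later plays can target them.
    # The rendered loop item "enable_cinder_True" matches the log output above.
    - name: Group hosts based on Kolla action
      ansible.builtin.group_by:
        key: "kolla_action_{{ kolla_action }}"

    - name: Group hosts based on enabled services
      ansible.builtin.group_by:
        key: "{{ item }}"
      loop:
        - "enable_cinder_{{ enable_cinder | bool }}"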
2025-01-16 15:16:40.771293 | orchestrator | PLAY [Apply role cinder] *******************************************************
2025-01-16 15:16:40.771304 | orchestrator |
2025-01-16 15:16:40.771314 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-01-16 15:16:40.771325 | orchestrator | Thursday 16 January 2025 15:14:23 +0000 (0:00:00.509) 0:00:01.129 ******
2025-01-16 15:16:40.771350 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-01-16 15:16:40.771362 | orchestrator |
2025-01-16 15:16:40.771373 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2025-01-16 15:16:40.771383 | orchestrator | Thursday 16 January 2025 15:14:24 +0000 (0:00:00.839) 0:00:01.969 ******
2025-01-16 15:16:40.771394 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2025-01-16 15:16:40.771406 | orchestrator |
2025-01-16 15:16:40.771417 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2025-01-16 15:16:40.771427 | orchestrator | Thursday 16 January 2025 15:14:26 +0000 (0:00:02.235) 0:00:04.204 ******
2025-01-16 15:16:40.771438 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2025-01-16 15:16:40.771450 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2025-01-16 15:16:40.771460 | orchestrator |
2025-01-16 15:16:40.771488 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2025-01-16 15:16:40.771499 | orchestrator | Thursday 16 January 2025 15:14:30 +0000 (0:00:04.520) 0:00:08.725 ******
2025-01-16 15:16:40.771509 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-01-16 15:16:40.771520 | orchestrator |
2025-01-16 15:16:40.771529 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2025-01-16 15:16:40.771539 | orchestrator | Thursday 16 January 2025 15:14:33 +0000 (0:00:02.389) 0:00:11.115 ******
2025-01-16 15:16:40.771555 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-01-16 15:16:40.771587 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2025-01-16 15:16:40.771599 | orchestrator |
2025-01-16 15:16:40.771611 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2025-01-16 15:16:40.771623 | orchestrator | Thursday 16 January 2025 15:14:35 +0000 (0:00:02.635) 0:00:13.750 ******
2025-01-16 15:16:40.771635 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-01-16 15:16:40.771648 | orchestrator |
2025-01-16 15:16:40.771660 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] **********************
2025-01-16 15:16:40.771673 | orchestrator | Thursday 16 January 2025 15:14:38 +0000 (0:00:02.260) 0:00:16.011 ******
2025-01-16 15:16:40.771684 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin)
2025-01-16 15:16:40.771697 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service)
2025-01-16 15:16:40.772179 | orchestrator |
2025-01-16 15:16:40.772204 | orchestrator | TASK [cinder : Ensuring config directories exist] ******************************
2025-01-16 15:16:40.772217 | orchestrator | Thursday 16 January 2025 15:14:44 +0000 (0:00:06.204) 0:00:22.216 ******
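Note: the service-ks-register sequence above (service, endpoints, project, user, roles, role grants) is standard Keystone registration. A minimal sketch with the openstack.cloud collection follows, reusing the endpoint URLs from the log; the cloud profile and password variable names are assumptions, not the actual role.

    # Illustrative sketch only: the same registration expressed with openstack.cloud modules.
    - name: cinder | Creating services
      openstack.cloud.catalog_service:
        cloud: default                 # assumed clouds.yaml profile
        name: cinderv3
        service_type: volumev3
        state: present

    - name: cinder | Creating endpoints
      openstack.cloud.endpoint:
        cloud: default
        service: cinderv3
        endpoint_interface: "{{ item.interface }}"
        url: "{{ item.url }}"
        state: present
      loop:
        - { interface: internal, url: "https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s" }
        - { interface: public, url: "https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s" }

    - name: cinder | Creating users
      openstack.cloud.identity_user:
        cloud: default
        name: cinder
        password: "{{ cinder_keystone_password }}"   # assumed variable name
        default_project: service
        state: present

    - name: cinder | Granting user roles
      openstack.cloud.role_assignment:
        cloud: default
        user: cinder
        project: service
        role: "{{ item }}"
      loop:
        - admin
        - service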
2025-01-16 15:16:40.772311 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-01-16 15:16:40.772328 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.772351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-01-16 15:16:40.772363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-01-16 15:16:40.772375 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 
'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-01-16 15:16:40.772899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-01-16 15:16:40.772970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-01-16 15:16:40.772996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.773006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.773015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-01-16 15:16:40.773024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.773060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.773078 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}}}})  2025-01-16 15:16:40.773087 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.773096 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-01-16 15:16:40.773105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-01-16 15:16:40.773115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.773147 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-01-16 15:16:40.773157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.773166 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.773175 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-01-16 15:16:40.773184 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-01-16 15:16:40.773215 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-01-16 15:16:40.773226 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-01-16 15:16:40.773235 | orchestrator | 2025-01-16 15:16:40.773244 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-01-16 15:16:40.773253 | orchestrator | Thursday 16 January 2025 15:14:46 +0000 (0:00:02.401) 0:00:24.617 ****** 2025-01-16 15:16:40.773264 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:16:40.773273 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:16:40.773281 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:16:40.773289 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 15:16:40.773298 | orchestrator | 2025-01-16 15:16:40.773306 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-01-16 15:16:40.773314 | orchestrator | Thursday 16 January 2025 15:14:47 +0000 (0:00:01.090) 0:00:25.708 ****** 2025-01-16 15:16:40.773325 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-01-16 15:16:40.773333 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-01-16 15:16:40.773342 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-01-16 15:16:40.773350 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-01-16 15:16:40.773358 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-01-16 15:16:40.773367 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-01-16 15:16:40.773375 | orchestrator | 2025-01-16 15:16:40.773383 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-01-16 15:16:40.773392 | orchestrator | Thursday 16 January 2025 15:14:52 +0000 (0:00:04.235) 0:00:29.943 ****** 2025-01-16 15:16:40.773402 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-01-16 15:16:40.773422 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-01-16 15:16:40.773453 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-01-16 15:16:40.773465 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-01-16 15:16:40.773517 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-01-16 15:16:40.773526 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-01-16 15:16:40.773537 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-01-16 15:16:40.773576 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-01-16 15:16:40.773588 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-01-16 15:16:40.773598 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-01-16 15:16:40.773606 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-01-16 15:16:40.773620 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-01-16 15:16:40.773629 | orchestrator | 2025-01-16 15:16:40.773638 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-01-16 15:16:40.773647 | orchestrator | Thursday 16 January 2025 15:14:55 +0000 (0:00:03.142) 0:00:33.086 ****** 2025-01-16 15:16:40.773655 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-01-16 15:16:40.773684 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-01-16 15:16:40.773693 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-01-16 15:16:40.773702 | 
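Note: the loop items above ({'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) suggest a Ceph backend definition roughly like the sketch below, with keyrings copied per backend into the per-service config directories that are bind-mounted into the containers. The play name, group name, and source paths are assumptions for illustration, not taken from the log.

    # Illustrative sketch only, not the kolla-ansible source.
    - name: Distribute Ceph keyrings to cinder-volume hosts (sketch)
      hosts: cinder-volume            # assumed inventory group
      vars:
        cinder_ceph_backends:         # reconstructed from the loop items above
          - name: rbd-1
            cluster: ceph
            enabled: true
      tasks:
        - name: Copy over Ceph keyring files for cinder-volume
          ansible.builtin.copy:
            src: "files/{{ item.cluster }}.client.cinder.keyring"              # assumed source path
            dest: "/etc/kolla/cinder-volume/{{ item.cluster }}.client.cinder.keyring"  # kolla config dir, as mounted above
            mode: "0600"
          loop: "{{ cinder_ceph_backends | selectattr('enabled') | list }}"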
orchestrator | 2025-01-16 15:16:40.773710 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-01-16 15:16:40.773719 | orchestrator | Thursday 16 January 2025 15:14:56 +0000 (0:00:01.413) 0:00:34.500 ****** 2025-01-16 15:16:40.773728 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-01-16 15:16:40.773737 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-01-16 15:16:40.773745 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-01-16 15:16:40.773752 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-01-16 15:16:40.773767 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-01-16 15:16:40.773776 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-01-16 15:16:40.773784 | orchestrator | 2025-01-16 15:16:40.773792 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-01-16 15:16:40.773801 | orchestrator | Thursday 16 January 2025 15:14:59 +0000 (0:00:03.041) 0:00:37.542 ****** 2025-01-16 15:16:40.773810 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-01-16 15:16:40.773818 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-01-16 15:16:40.773826 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-01-16 15:16:40.773835 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-01-16 15:16:40.773843 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-01-16 15:16:40.773852 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-01-16 15:16:40.773860 | orchestrator | 2025-01-16 15:16:40.773867 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-01-16 15:16:40.773876 | orchestrator | Thursday 16 January 2025 15:15:00 +0000 (0:00:00.915) 0:00:38.457 ****** 2025-01-16 15:16:40.773885 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:16:40.773894 | orchestrator | 2025-01-16 15:16:40.773902 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-01-16 15:16:40.773910 | orchestrator | Thursday 16 January 2025 15:15:00 +0000 (0:00:00.083) 0:00:38.540 ****** 2025-01-16 15:16:40.773918 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:16:40.773927 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:16:40.773935 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:16:40.773950 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:16:40.773959 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:16:40.773967 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:16:40.773976 | orchestrator | 2025-01-16 15:16:40.773984 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-01-16 15:16:40.773992 | orchestrator | Thursday 16 January 2025 15:15:01 +0000 (0:00:00.559) 0:00:39.099 ****** 2025-01-16 15:16:40.774001 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 15:16:40.774011 | orchestrator | 2025-01-16 15:16:40.774054 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-01-16 15:16:40.774063 | orchestrator | Thursday 16 January 2025 15:15:02 +0000 
(0:00:01.292) 0:00:40.392 ****** 2025-01-16 15:16:40.774072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-01-16 15:16:40.774109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-01-16 15:16:40.774120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-01-16 15:16:40.774130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-01-16 15:16:40.774145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-01-16 15:16:40.774154 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-01-16 15:16:40.774163 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-01-16 15:16:40.774191 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-01-16 15:16:40.774200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-01-16 15:16:40.774213 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-01-16 15:16:40.774224 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-01-16 15:16:40.774232 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-01-16 15:16:40.774241 | orchestrator | 2025-01-16 15:16:40.774250 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-01-16 15:16:40.774259 | orchestrator | Thursday 16 January 2025 15:15:05 +0000 (0:00:02.699) 0:00:43.091 ****** 2025-01-16 15:16:40.774287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-01-16 15:16:40.774298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.774307 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:16:40.774322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-01-16 15:16:40.774332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.774340 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:16:40.774349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-01-16 15:16:40.774357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.774366 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:16:40.774393 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.774409 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.774419 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:16:40.774428 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.774437 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.774446 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:16:40.774455 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.774540 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.774561 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:16:40.774569 | orchestrator | 2025-01-16 15:16:40.774577 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-01-16 15:16:40.774585 | orchestrator | Thursday 16 January 2025 15:15:06 +0000 (0:00:01.616) 0:00:44.708 ****** 2025-01-16 15:16:40.774594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-01-16 15:16:40.774603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.774613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-01-16 15:16:40.774621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.774630 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:16:40.774638 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:16:40.774667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-01-16 15:16:40.774682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.774692 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:16:40.774700 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.774709 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.774718 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:16:40.774726 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.774755 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 
5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.774770 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:16:40.774779 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.774788 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.774796 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:16:40.774805 | orchestrator | 2025-01-16 15:16:40.774813 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-01-16 15:16:40.774822 | orchestrator | Thursday 16 January 2025 15:15:08 +0000 (0:00:02.063) 0:00:46.772 ****** 2025-01-16 15:16:40.774830 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-01-16 15:16:40.774840 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.774871 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-01-16 15:16:40.774888 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.774897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-01-16 15:16:40.774907 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-01-16 15:16:40.774916 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.774945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-01-16 15:16:40.774961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-01-16 15:16:40.774970 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-01-16 15:16:40.774978 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-01-16 15:16:40.774987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-01-16 15:16:40.775013 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-01-16 15:16:40.775028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.775038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.775046 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-01-16 15:16:40.775052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.775057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.775090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-01-16 15:16:40.775098 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-01-16 15:16:40.775104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.775110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.775115 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-01-16 15:16:40.775128 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}}) 2025-01-16 15:16:40.775133 | orchestrator | 2025-01-16 15:16:40.775139 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-01-16 15:16:40.775144 | orchestrator | Thursday 16 January 2025 15:15:11 +0000 (0:00:02.852) 0:00:49.625 ****** 2025-01-16 15:16:40.775150 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-01-16 15:16:40.775156 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:16:40.775162 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-01-16 15:16:40.775167 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:16:40.775173 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-01-16 15:16:40.775179 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-01-16 15:16:40.775184 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:16:40.775190 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-01-16 15:16:40.775195 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-01-16 15:16:40.775201 | orchestrator | 2025-01-16 15:16:40.775206 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-01-16 15:16:40.775211 | orchestrator | Thursday 16 January 2025 15:15:14 +0000 (0:00:02.597) 0:00:52.222 ****** 2025-01-16 15:16:40.775217 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-01-16 15:16:40.775222 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.775228 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-01-16 15:16:40.775238 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.775250 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-01-16 15:16:40.775260 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.775268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-01-16 15:16:40.775277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-01-16 15:16:40.775294 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-01-16 15:16:40.775303 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-01-16 15:16:40.775312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-01-16 15:16:40.775321 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-01-16 15:16:40.775334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-01-16 15:16:40.775344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.775357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.775368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': 
True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-01-16 15:16:40.775376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.775385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.775399 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-01-16 15:16:40.775414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-01-16 15:16:40.775424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 
'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.775432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.775441 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-01-16 15:16:40.775455 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-01-16 15:16:40.775463 | orchestrator | 2025-01-16 15:16:40.775490 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-01-16 15:16:40.775499 | orchestrator | Thursday 16 January 2025 15:15:28 +0000 (0:00:14.521) 0:01:06.744 ****** 2025-01-16 15:16:40.775509 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:16:40.775518 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:16:40.775527 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:16:40.775536 
| orchestrator | changed: [testbed-node-3] 2025-01-16 15:16:40.775546 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:16:40.775553 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:16:40.775561 | orchestrator | 2025-01-16 15:16:40.775569 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-01-16 15:16:40.775576 | orchestrator | Thursday 16 January 2025 15:15:32 +0000 (0:00:03.361) 0:01:10.105 ****** 2025-01-16 15:16:40.775590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-01-16 15:16:40.775600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.775608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.775622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.775630 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:16:40.775638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-01-16 15:16:40.775647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-01-16 15:16:40.775652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.775658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  
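The items dumped in these tasks are kolla-ansible service definitions: each maps a container name to an image pulled from the local Nexus registry, a set of bind mounts, and a healthcheck. The test "healthcheck_port cinder-backup 5672" checks that the service process holds a connection to port 5672 (RabbitMQ), while "healthcheck_curl http://...:8776" probes the cinder-api endpoint. Below is a minimal Python sketch of that structure and of how such a healthcheck block could map onto Docker-style health flags; the dict mirrors the log output above (trimmed), but the healthcheck_flags helper and the flag mapping are illustrative assumptions, not OSISM or kolla code.

    # Illustrative sketch only: the shape of one service definition as dumped
    # in the log above, plus a helper that renders its 'healthcheck' block as
    # docker-run style flags. The mapping is an assumption for illustration.
    cinder_backup = {
        "container_name": "cinder_backup",
        "group": "cinder-backup",
        "enabled": True,
        "image": "nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1",
        "privileged": True,
        "volumes": [
            "/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro",
            "kolla_logs:/var/log/kolla/",
        ],
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_port cinder-backup 5672"],
            "timeout": "30",
        },
    }

    def healthcheck_flags(hc):
        # Render the kolla healthcheck dict as docker-run style flags.
        return [
            "--health-cmd", hc["test"][1],
            "--health-interval", f"{hc['interval']}s",
            "--health-retries", hc["retries"],
            "--health-start-period", f"{hc['start_period']}s",
            "--health-timeout", f"{hc['timeout']}s",
        ]

    if __name__ == "__main__":
        print(" ".join(healthcheck_flags(cinder_backup["healthcheck"])))
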
2025-01-16 15:16:40.775667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.775672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.775677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.775686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.775691 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:16:40.775696 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:16:40.775701 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-01-16 15:16:40.775709 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.775714 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.775719 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.775724 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:16:40.775734 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-01-16 15:16:40.775739 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.775747 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.775752 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.775757 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:16:40.775762 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-01-16 15:16:40.775770 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.775775 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.775783 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.775788 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:16:40.775798 | orchestrator | 2025-01-16 15:16:40.775803 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-01-16 15:16:40.775808 | orchestrator | Thursday 16 January 2025 15:15:34 +0000 (0:00:01.916) 0:01:12.022 ****** 2025-01-16 15:16:40.775813 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:16:40.775818 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:16:40.775822 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:16:40.775827 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:16:40.775832 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:16:40.775837 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:16:40.775842 | orchestrator | 2025-01-16 15:16:40.775846 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-01-16 15:16:40.775851 | orchestrator | Thursday 16 January 2025 15:15:34 +0000 (0:00:00.843) 0:01:12.866 ****** 2025-01-16 15:16:40.775856 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-01-16 15:16:40.775862 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-01-16 15:16:40.775870 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.775883 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.775889 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-01-16 15:16:40.775894 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.775899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-01-16 15:16:40.775908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-01-16 15:16:40.775918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-01-16 15:16:40.775924 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-01-16 15:16:40.775929 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-01-16 15:16:40.775934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-01-16 15:16:40.775943 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-01-16 15:16:40.775951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 
'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.775957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-01-16 15:16:40.775962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.775967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.775972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.775980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-01-16 15:16:40.775988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.775993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-01-16 15:16:40.775998 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-01-16 15:16:40.776003 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-01-16 15:16:40.776010 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-01-16 15:16:40.776018 | orchestrator | 2025-01-16 15:16:40.776024 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-01-16 15:16:40.776029 | orchestrator | Thursday 16 January 2025 15:15:37 +0000 (0:00:02.297) 0:01:15.164 ****** 2025-01-16 15:16:40.776034 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:16:40.776038 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:16:40.776043 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:16:40.776048 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:16:40.776053 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:16:40.776058 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:16:40.776065 | orchestrator | 2025-01-16 15:16:40.776076 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-01-16 15:16:40.776084 | orchestrator | Thursday 16 January 2025 15:15:38 +0000 (0:00:01.106) 0:01:16.270 ****** 2025-01-16 15:16:40.776091 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:16:40.776098 | orchestrator | 2025-01-16 15:16:40.776106 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-01-16 15:16:40.776113 | orchestrator | Thursday 16 January 2025 15:15:40 +0000 (0:00:02.063) 0:01:18.333 ****** 2025-01-16 15:16:40.776120 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:16:40.776127 | orchestrator | 2025-01-16 15:16:40.776134 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-01-16 15:16:40.776141 | orchestrator | Thursday 16 January 2025 15:15:41 +0000 (0:00:01.545) 0:01:19.879 ****** 2025-01-16 15:16:40.776148 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:16:40.776154 | orchestrator | 2025-01-16 15:16:40.776161 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-01-16 15:16:40.776168 | orchestrator | Thursday 16 January 2025 15:15:52 +0000 (0:00:10.063) 0:01:29.943 ****** 2025-01-16 15:16:40.776175 | orchestrator | 2025-01-16 15:16:40.776182 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-01-16 15:16:40.776188 | orchestrator | Thursday 16 January 2025 15:15:52 +0000 (0:00:00.172) 0:01:30.116 ****** 
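From this point the role follows the usual kolla-ansible deploy flow visible in the log: create the Cinder database and its user on testbed-node-0, run the one-shot bootstrap container to apply the schema, then flush handlers so cinder-api and cinder-scheduler restart on the three controllers and cinder-volume and cinder-backup restart on the volume nodes. After the recap below, the wrapper that launched the play keeps polling its task IDs until they leave STARTED ("Task ... is in state STARTED", "Wait 1 second(s) until the next check"). A minimal Python sketch of that polling pattern follows; fetch_state is a hypothetical stand-in for querying the real task backend, which is not shown in this log.

    # Minimal sketch of the task-state polling whose output appears after the
    # recap below. fetch_state() is a hypothetical placeholder for the real
    # task backend query; interval and message format mirror the log.
    import time

    def fetch_state(task_id: str) -> str:
        # Placeholder: the real system would ask its task queue for the state.
        return "SUCCESS"

    def wait_for_tasks(task_ids, interval=1):
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = fetch_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)

    if __name__ == "__main__":
        wait_for_tasks(["ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544"])
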
2025-01-16 15:16:40.776195 | orchestrator | 2025-01-16 15:16:40.776203 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-01-16 15:16:40.776211 | orchestrator | Thursday 16 January 2025 15:15:52 +0000 (0:00:00.446) 0:01:30.562 ****** 2025-01-16 15:16:40.776218 | orchestrator | 2025-01-16 15:16:40.776226 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-01-16 15:16:40.776233 | orchestrator | Thursday 16 January 2025 15:15:52 +0000 (0:00:00.124) 0:01:30.686 ****** 2025-01-16 15:16:40.776241 | orchestrator | 2025-01-16 15:16:40.776247 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-01-16 15:16:40.776255 | orchestrator | Thursday 16 January 2025 15:15:52 +0000 (0:00:00.133) 0:01:30.819 ****** 2025-01-16 15:16:40.776262 | orchestrator | 2025-01-16 15:16:40.776270 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-01-16 15:16:40.776278 | orchestrator | Thursday 16 January 2025 15:15:53 +0000 (0:00:00.180) 0:01:31.000 ****** 2025-01-16 15:16:40.776285 | orchestrator | 2025-01-16 15:16:40.776292 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-01-16 15:16:40.776299 | orchestrator | Thursday 16 January 2025 15:15:53 +0000 (0:00:00.624) 0:01:31.624 ****** 2025-01-16 15:16:40.776306 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:16:40.776313 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:16:40.776320 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:16:40.776327 | orchestrator | 2025-01-16 15:16:40.776334 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-01-16 15:16:40.776342 | orchestrator | Thursday 16 January 2025 15:16:06 +0000 (0:00:12.416) 0:01:44.040 ****** 2025-01-16 15:16:40.776354 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:16:40.776361 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:16:40.776369 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:16:40.776376 | orchestrator | 2025-01-16 15:16:40.776384 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-01-16 15:16:40.776392 | orchestrator | Thursday 16 January 2025 15:16:11 +0000 (0:00:05.273) 0:01:49.314 ****** 2025-01-16 15:16:40.776398 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:16:40.776406 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:16:40.776414 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:16:40.776421 | orchestrator | 2025-01-16 15:16:40.776429 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-01-16 15:16:40.776437 | orchestrator | Thursday 16 January 2025 15:16:28 +0000 (0:00:16.956) 0:02:06.270 ****** 2025-01-16 15:16:40.776445 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:16:40.776453 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:16:40.776461 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:16:40.776509 | orchestrator | 2025-01-16 15:16:40.776519 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-01-16 15:16:40.776527 | orchestrator | Thursday 16 January 2025 15:16:37 +0000 (0:00:08.842) 0:02:15.113 ****** 2025-01-16 15:16:40.776534 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:16:40.776542 | orchestrator | 2025-01-16 
15:16:40.776549 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 15:16:40.776557 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-01-16 15:16:40.776565 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-01-16 15:16:40.776573 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-01-16 15:16:40.776590 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-01-16 15:16:43.794542 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-01-16 15:16:43.794653 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-01-16 15:16:43.794666 | orchestrator | 2025-01-16 15:16:43.794677 | orchestrator | 2025-01-16 15:16:43.794687 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 15:16:43.794699 | orchestrator | Thursday 16 January 2025 15:16:37 +0000 (0:00:00.439) 0:02:15.552 ****** 2025-01-16 15:16:43.794708 | orchestrator | =============================================================================== 2025-01-16 15:16:43.794733 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 16.96s 2025-01-16 15:16:43.794743 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 14.52s 2025-01-16 15:16:43.794752 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 12.42s 2025-01-16 15:16:43.794761 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 10.06s 2025-01-16 15:16:43.794771 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 8.84s 2025-01-16 15:16:43.794780 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 6.20s 2025-01-16 15:16:43.794789 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 5.27s 2025-01-16 15:16:43.794798 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 4.52s 2025-01-16 15:16:43.794807 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 4.24s 2025-01-16 15:16:43.794840 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 3.36s 2025-01-16 15:16:43.794850 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.14s 2025-01-16 15:16:43.794859 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.04s 2025-01-16 15:16:43.794868 | orchestrator | cinder : Copying over config.json files for services -------------------- 2.85s 2025-01-16 15:16:43.794877 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 2.70s 2025-01-16 15:16:43.794887 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 2.64s 2025-01-16 15:16:43.794896 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.60s 2025-01-16 15:16:43.794905 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.40s 2025-01-16 15:16:43.794914 | orchestrator | service-ks-register : cinder | 
Creating projects ------------------------ 2.39s 2025-01-16 15:16:43.794924 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.30s 2025-01-16 15:16:43.794933 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 2.26s 2025-01-16 15:16:43.794943 | orchestrator | 2025-01-16 15:16:40 | INFO  | Task 15084ea0-5098-485e-b9fb-e1d9b889fff9 is in state STARTED 2025-01-16 15:16:43.794953 | orchestrator | 2025-01-16 15:16:40 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:16:43.794977 | orchestrator | 2025-01-16 15:16:43 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:16:43.799976 | orchestrator | 2025-01-16 15:16:43 | INFO  | Task 81658657-b236-472c-8b36-9b836dbb2224 is in state STARTED 2025-01-16 15:16:43.800063 | orchestrator | 2025-01-16 15:16:43 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:16:43.801136 | orchestrator | 2025-01-16 15:16:43 | INFO  | Task 15084ea0-5098-485e-b9fb-e1d9b889fff9 is in state STARTED 2025-01-16 15:16:46.823853 | orchestrator | 2025-01-16 15:16:43 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:16:46.824061 | orchestrator | 2025-01-16 15:16:46 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:16:49.851959 | orchestrator | 2025-01-16 15:16:46 | INFO  | Task 81658657-b236-472c-8b36-9b836dbb2224 is in state STARTED 2025-01-16 15:16:49.852050 | orchestrator | 2025-01-16 15:16:46 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:16:49.852062 | orchestrator | 2025-01-16 15:16:46 | INFO  | Task 15084ea0-5098-485e-b9fb-e1d9b889fff9 is in state STARTED 2025-01-16 15:16:49.852072 | orchestrator | 2025-01-16 15:16:46 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:16:49.852098 | orchestrator | 2025-01-16 15:16:49 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:16:49.854805 | orchestrator | 2025-01-16 15:16:49 | INFO  | Task 81658657-b236-472c-8b36-9b836dbb2224 is in state STARTED 2025-01-16 15:16:49.859053 | orchestrator | 2025-01-16 15:16:49 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:16:52.883136 | orchestrator | 2025-01-16 15:16:49 | INFO  | Task 15084ea0-5098-485e-b9fb-e1d9b889fff9 is in state STARTED 2025-01-16 15:16:52.883368 | orchestrator | 2025-01-16 15:16:49 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:16:52.883414 | orchestrator | 2025-01-16 15:16:52 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:16:52.884222 | orchestrator | 2025-01-16 15:16:52 | INFO  | Task 81658657-b236-472c-8b36-9b836dbb2224 is in state STARTED 2025-01-16 15:16:52.884258 | orchestrator | 2025-01-16 15:16:52 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:16:52.884310 | orchestrator | 2025-01-16 15:16:52 | INFO  | Task 15084ea0-5098-485e-b9fb-e1d9b889fff9 is in state STARTED 2025-01-16 15:16:55.901750 | orchestrator | 2025-01-16 15:16:52 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:16:55.901908 | orchestrator | 2025-01-16 15:16:55 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:16:55.902220 | orchestrator | 2025-01-16 15:16:55 | INFO  | Task 81658657-b236-472c-8b36-9b836dbb2224 is in state STARTED 2025-01-16 15:16:55.902261 | orchestrator | 2025-01-16 15:16:55 | INFO  | Task 
43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:16:55.902296 | orchestrator | 2025-01-16 15:16:55 | INFO  | Task 15084ea0-5098-485e-b9fb-e1d9b889fff9 is in state STARTED 2025-01-16 15:16:58.920076 | orchestrator | 2025-01-16 15:16:55 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:16:58.920191 | orchestrator | 2025-01-16 15:16:58 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:16:58.920438 | orchestrator | 2025-01-16 15:16:58 | INFO  | Task 81658657-b236-472c-8b36-9b836dbb2224 is in state STARTED 2025-01-16 15:16:58.920521 | orchestrator | 2025-01-16 15:16:58 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:16:58.920954 | orchestrator | 2025-01-16 15:16:58 | INFO  | Task 15084ea0-5098-485e-b9fb-e1d9b889fff9 is in state STARTED 2025-01-16 15:17:01.939597 | orchestrator | 2025-01-16 15:16:58 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:17:01.939716 | orchestrator | 2025-01-16 15:17:01 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:17:01.940043 | orchestrator | 2025-01-16 15:17:01 | INFO  | Task 81658657-b236-472c-8b36-9b836dbb2224 is in state STARTED 2025-01-16 15:17:01.940071 | orchestrator | 2025-01-16 15:17:01 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state STARTED 2025-01-16 15:17:01.940722 | orchestrator | 2025-01-16 15:17:01 | INFO  | Task 15084ea0-5098-485e-b9fb-e1d9b889fff9 is in state STARTED 2025-01-16 15:17:01.940776 | orchestrator | 2025-01-16 15:17:01 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:17:04.965800 | orchestrator | 2025-01-16 15:17:04 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:17:04.966653 | orchestrator | 2025-01-16 15:17:04 | INFO  | Task 81658657-b236-472c-8b36-9b836dbb2224 is in state STARTED 2025-01-16 15:17:04.971636 | orchestrator | 2025-01-16 15:17:04 | INFO  | Task 43b9ebe2-6a06-4ba8-93d7-90f661036dfb is in state SUCCESS 2025-01-16 15:17:04.972854 | orchestrator | 2025-01-16 15:17:04.972910 | orchestrator | 2025-01-16 15:17:04.972923 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-01-16 15:17:04.972992 | orchestrator | 2025-01-16 15:17:04.973005 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-01-16 15:17:04.973015 | orchestrator | Thursday 16 January 2025 15:14:19 +0000 (0:00:00.280) 0:00:00.280 ****** 2025-01-16 15:17:04.973025 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:17:04.973035 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:17:04.973045 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:17:04.973055 | orchestrator | 2025-01-16 15:17:04.973065 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-01-16 15:17:04.973075 | orchestrator | Thursday 16 January 2025 15:14:19 +0000 (0:00:00.264) 0:00:00.544 ****** 2025-01-16 15:17:04.973084 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-01-16 15:17:04.973094 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-01-16 15:17:04.973104 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-01-16 15:17:04.973114 | orchestrator | 2025-01-16 15:17:04.973123 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-01-16 15:17:04.973152 | orchestrator | 2025-01-16 15:17:04.973162 | orchestrator | TASK 
[glance : include_tasks] ************************************************** 2025-01-16 15:17:04.973172 | orchestrator | Thursday 16 January 2025 15:14:20 +0000 (0:00:00.250) 0:00:00.794 ****** 2025-01-16 15:17:04.973181 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:17:04.973192 | orchestrator | 2025-01-16 15:17:04.973201 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-01-16 15:17:04.973211 | orchestrator | Thursday 16 January 2025 15:14:20 +0000 (0:00:00.524) 0:00:01.319 ****** 2025-01-16 15:17:04.973221 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-01-16 15:17:04.973230 | orchestrator | 2025-01-16 15:17:04.973239 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-01-16 15:17:04.973249 | orchestrator | Thursday 16 January 2025 15:14:22 +0000 (0:00:02.271) 0:00:03.590 ****** 2025-01-16 15:17:04.973258 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-01-16 15:17:04.973268 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-01-16 15:17:04.973277 | orchestrator | 2025-01-16 15:17:04.973287 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-01-16 15:17:04.973297 | orchestrator | Thursday 16 January 2025 15:14:27 +0000 (0:00:04.238) 0:00:07.829 ****** 2025-01-16 15:17:04.973306 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-01-16 15:17:04.973316 | orchestrator | 2025-01-16 15:17:04.973326 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-01-16 15:17:04.973335 | orchestrator | Thursday 16 January 2025 15:14:29 +0000 (0:00:02.505) 0:00:10.335 ****** 2025-01-16 15:17:04.973345 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-01-16 15:17:04.973354 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-01-16 15:17:04.973363 | orchestrator | 2025-01-16 15:17:04.973373 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-01-16 15:17:04.973382 | orchestrator | Thursday 16 January 2025 15:14:32 +0000 (0:00:02.737) 0:00:13.072 ****** 2025-01-16 15:17:04.973399 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-01-16 15:17:04.973409 | orchestrator | 2025-01-16 15:17:04.973418 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-01-16 15:17:04.973428 | orchestrator | Thursday 16 January 2025 15:14:34 +0000 (0:00:02.195) 0:00:15.268 ****** 2025-01-16 15:17:04.973437 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-01-16 15:17:04.973447 | orchestrator | 2025-01-16 15:17:04.973456 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-01-16 15:17:04.973487 | orchestrator | Thursday 16 January 2025 15:14:37 +0000 (0:00:02.853) 0:00:18.121 ****** 2025-01-16 15:17:04.973512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-01-16 15:17:04.973533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-01-16 15:17:04.973545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-01-16 15:17:04.973567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-01-16 15:17:04.973579 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-01-16 15:17:04.973597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 
''], 'tls_backend': 'yes'}}}})  2025-01-16 15:17:04.973612 | orchestrator | 2025-01-16 15:17:04.973622 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-01-16 15:17:04.973634 | orchestrator | Thursday 16 January 2025 15:14:41 +0000 (0:00:03.853) 0:00:21.975 ****** 2025-01-16 15:17:04.973644 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:17:04.973655 | orchestrator | 2025-01-16 15:17:04.973670 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-01-16 15:17:04.973681 | orchestrator | Thursday 16 January 2025 15:14:41 +0000 (0:00:00.430) 0:00:22.405 ****** 2025-01-16 15:17:04.973691 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:17:04.973702 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:17:04.973713 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:17:04.973723 | orchestrator | 2025-01-16 15:17:04.973734 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-01-16 15:17:04.973745 | orchestrator | Thursday 16 January 2025 15:14:51 +0000 (0:00:09.945) 0:00:32.351 ****** 2025-01-16 15:17:04.973756 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-01-16 15:17:04.973766 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-01-16 15:17:04.973777 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-01-16 15:17:04.973788 | orchestrator | 2025-01-16 15:17:04.973798 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-01-16 15:17:04.973819 | orchestrator | Thursday 16 January 2025 15:14:53 +0000 (0:00:02.018) 0:00:34.369 ****** 2025-01-16 15:17:04.973830 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-01-16 15:17:04.973840 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-01-16 15:17:04.973851 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-01-16 15:17:04.973861 | orchestrator | 2025-01-16 15:17:04.973880 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-01-16 15:17:04.973891 | orchestrator | Thursday 16 January 2025 15:14:54 +0000 (0:00:01.148) 0:00:35.519 ****** 2025-01-16 15:17:04.973906 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:17:04.973917 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:17:04.973928 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:17:04.973938 | orchestrator | 2025-01-16 15:17:04.973948 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-01-16 15:17:04.973959 | orchestrator | Thursday 16 January 2025 15:14:55 +0000 (0:00:00.763) 0:00:36.282 ****** 2025-01-16 15:17:04.973969 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:17:04.973980 | orchestrator | 2025-01-16 15:17:04.973991 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-01-16 15:17:04.974001 | orchestrator | Thursday 16 January 2025 15:14:55 +0000 (0:00:00.099) 0:00:36.382 ****** 2025-01-16 
15:17:04.974052 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:17:04.974064 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:17:04.974074 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:17:04.974083 | orchestrator | 2025-01-16 15:17:04.974092 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-01-16 15:17:04.974102 | orchestrator | Thursday 16 January 2025 15:14:56 +0000 (0:00:00.293) 0:00:36.676 ****** 2025-01-16 15:17:04.974111 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:17:04.974121 | orchestrator | 2025-01-16 15:17:04.974130 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-01-16 15:17:04.974140 | orchestrator | Thursday 16 January 2025 15:14:57 +0000 (0:00:01.293) 0:00:37.970 ****** 2025-01-16 15:17:04.974157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-01-16 15:17:04.974198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-01-16 15:17:04.974222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-01-16 15:17:04.974241 | orchestrator | 2025-01-16 15:17:04.974251 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-01-16 15:17:04.974261 | orchestrator | Thursday 16 January 2025 15:15:01 +0000 (0:00:03.932) 0:00:41.903 ****** 2025-01-16 15:17:04.974270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-01-16 15:17:04.974291 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:17:04.974308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-01-16 15:17:04.974326 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:17:04.974336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-01-16 15:17:04.974351 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:17:04.974361 | orchestrator | 2025-01-16 15:17:04.974370 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-01-16 15:17:04.974388 | orchestrator | Thursday 16 January 2025 15:15:05 +0000 (0:00:04.107) 0:00:46.011 ****** 2025-01-16 15:17:04.974403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-01-16 15:17:04.974414 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:17:04.974423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-01-16 15:17:04.974441 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:17:04.974456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-01-16 15:17:04.974508 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:17:04.974519 | orchestrator | 2025-01-16 15:17:04.974528 | orchestrator 
| TASK [glance : Creating TLS backend PEM File] ********************************** 2025-01-16 15:17:04.974538 | orchestrator | Thursday 16 January 2025 15:15:09 +0000 (0:00:03.812) 0:00:49.823 ****** 2025-01-16 15:17:04.974547 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:17:04.974557 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:17:04.974566 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:17:04.974576 | orchestrator | 2025-01-16 15:17:04.974585 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-01-16 15:17:04.974595 | orchestrator | Thursday 16 January 2025 15:15:13 +0000 (0:00:04.457) 0:00:54.280 ****** 2025-01-16 15:17:04.974620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-01-16 15:17:04.974637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-01-16 15:17:04.974660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-01-16 15:17:04.974671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-01-16 15:17:04.974701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-01-16 15:17:04.974712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-01-16 15:17:04.974726 | orchestrator | 2025-01-16 15:17:04.974736 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-01-16 15:17:04.974746 | orchestrator | Thursday 16 January 2025 15:15:21 +0000 (0:00:07.865) 0:01:02.146 ****** 2025-01-16 15:17:04.974755 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:17:04.974770 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:17:04.974779 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:17:04.974789 | orchestrator | 2025-01-16 15:17:04.974798 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-01-16 15:17:04.974807 | orchestrator | Thursday 16 January 2025 15:15:36 +0000 (0:00:15.172) 0:01:17.318 ****** 2025-01-16 15:17:04.974817 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:17:04.974826 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:17:04.974835 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:17:04.974844 | orchestrator | 2025-01-16 15:17:04.974853 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-01-16 15:17:04.974863 | orchestrator | Thursday 16 January 2025 15:15:43 +0000 (0:00:06.523) 0:01:23.842 ****** 2025-01-16 15:17:04.974872 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:17:04.974881 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:17:04.974890 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:17:04.974899 | orchestrator | 2025-01-16 15:17:04.974908 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-01-16 15:17:04.974918 | orchestrator | Thursday 16 January 2025 15:15:49 +0000 (0:00:06.059) 0:01:29.901 ****** 2025-01-16 15:17:04.974927 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:17:04.974936 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:17:04.974945 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:17:04.974955 | orchestrator | 2025-01-16 15:17:04.974964 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-01-16 15:17:04.974973 | orchestrator | Thursday 16 January 2025 15:16:00 +0000 (0:00:11.661) 0:01:41.563 ****** 2025-01-16 15:17:04.974982 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:17:04.974992 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:17:04.975001 | orchestrator | skipping: 
[testbed-node-2] 2025-01-16 15:17:04.975010 | orchestrator | 2025-01-16 15:17:04.975019 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-01-16 15:17:04.975033 | orchestrator | Thursday 16 January 2025 15:16:05 +0000 (0:00:04.419) 0:01:45.983 ****** 2025-01-16 15:17:04.975042 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:17:04.975052 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:17:04.975061 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:17:04.975070 | orchestrator | 2025-01-16 15:17:04.975080 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-01-16 15:17:04.975094 | orchestrator | Thursday 16 January 2025 15:16:05 +0000 (0:00:00.270) 0:01:46.253 ****** 2025-01-16 15:17:04.975104 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-01-16 15:17:04.975113 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:17:04.975123 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-01-16 15:17:04.975133 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:17:04.975142 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-01-16 15:17:04.975152 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:17:04.975162 | orchestrator | 2025-01-16 15:17:04.975171 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-01-16 15:17:04.975181 | orchestrator | Thursday 16 January 2025 15:16:10 +0000 (0:00:04.490) 0:01:50.744 ****** 2025-01-16 15:17:04.975191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-01-16 15:17:04.975215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-01-16 15:17:04.975231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 
2000 rise 2 fall 5', '']}}}}) 2025-01-16 15:17:04.975249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-01-16 15:17:04.975270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-01-16 15:17:04.975290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-01-16 15:17:04.975300 | orchestrator | 2025-01-16 15:17:04.975310 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-01-16 15:17:04.975319 | orchestrator | Thursday 16 January 2025 15:16:14 +0000 (0:00:04.728) 0:01:55.472 ****** 2025-01-16 15:17:04.975329 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:17:04.975343 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:17:04.975353 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:17:04.975362 | orchestrator | 2025-01-16 15:17:04.975371 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-01-16 15:17:04.975381 | orchestrator | Thursday 16 January 2025 15:16:15 +0000 (0:00:00.505) 0:01:55.977 ****** 2025-01-16 15:17:04.975390 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:17:04.975399 | orchestrator | 2025-01-16 15:17:04.975414 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-01-16 15:17:04.975423 | orchestrator | Thursday 16 January 2025 15:16:16 +0000 (0:00:01.572) 0:01:57.550 ****** 2025-01-16 15:17:04.975433 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:17:04.975442 | orchestrator | 2025-01-16 15:17:04.975452 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-01-16 15:17:04.975461 | orchestrator | Thursday 16 January 2025 15:16:18 +0000 
(0:00:01.884) 0:01:59.435 ****** 2025-01-16 15:17:04.975518 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:17:04.975529 | orchestrator | 2025-01-16 15:17:04.975538 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-01-16 15:17:04.975548 | orchestrator | Thursday 16 January 2025 15:16:20 +0000 (0:00:01.676) 0:02:01.111 ****** 2025-01-16 15:17:04.975557 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:17:04.975567 | orchestrator | 2025-01-16 15:17:04.975577 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-01-16 15:17:04.975591 | orchestrator | Thursday 16 January 2025 15:16:36 +0000 (0:00:15.656) 0:02:16.767 ****** 2025-01-16 15:17:04.975600 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:17:04.975610 | orchestrator | 2025-01-16 15:17:04.975619 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-01-16 15:17:04.975629 | orchestrator | Thursday 16 January 2025 15:16:37 +0000 (0:00:01.440) 0:02:18.208 ****** 2025-01-16 15:17:04.975638 | orchestrator | 2025-01-16 15:17:04.975647 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-01-16 15:17:04.975657 | orchestrator | Thursday 16 January 2025 15:16:37 +0000 (0:00:00.047) 0:02:18.255 ****** 2025-01-16 15:17:04.975666 | orchestrator | 2025-01-16 15:17:04.975676 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-01-16 15:17:04.975685 | orchestrator | Thursday 16 January 2025 15:16:37 +0000 (0:00:00.044) 0:02:18.299 ****** 2025-01-16 15:17:04.975694 | orchestrator | 2025-01-16 15:17:04.975704 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-01-16 15:17:04.975714 | orchestrator | Thursday 16 January 2025 15:16:37 +0000 (0:00:00.145) 0:02:18.445 ****** 2025-01-16 15:17:04.975723 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:17:04.975732 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:17:04.975741 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:17:04.975751 | orchestrator | 2025-01-16 15:17:04.975760 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 15:17:04.975770 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-01-16 15:17:04.975781 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-01-16 15:17:04.975791 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-01-16 15:17:04.975800 | orchestrator | 2025-01-16 15:17:04.975810 | orchestrator | 2025-01-16 15:17:04.975819 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 15:17:04.975829 | orchestrator | Thursday 16 January 2025 15:17:02 +0000 (0:00:24.783) 0:02:43.229 ****** 2025-01-16 15:17:04.975838 | orchestrator | =============================================================================== 2025-01-16 15:17:04.975847 | orchestrator | glance : Restart glance-api container ---------------------------------- 24.78s 2025-01-16 15:17:04.975861 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 15.66s 2025-01-16 15:17:04.975871 | orchestrator | glance : Copying over glance-api.conf 
---------------------------------- 15.17s 2025-01-16 15:17:04.975880 | orchestrator | glance : Copying over glance-image-import.conf ------------------------- 11.66s 2025-01-16 15:17:04.975889 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 9.95s 2025-01-16 15:17:04.975898 | orchestrator | glance : Copying over config.json files for services -------------------- 7.87s 2025-01-16 15:17:04.975908 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 6.52s 2025-01-16 15:17:04.975917 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 6.06s 2025-01-16 15:17:04.975926 | orchestrator | glance : Check glance containers ---------------------------------------- 4.73s 2025-01-16 15:17:04.975936 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 4.49s 2025-01-16 15:17:04.975945 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 4.46s 2025-01-16 15:17:04.975953 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 4.42s 2025-01-16 15:17:04.975962 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 4.24s 2025-01-16 15:17:04.975971 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 4.11s 2025-01-16 15:17:04.975980 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.93s 2025-01-16 15:17:04.975989 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.85s 2025-01-16 15:17:04.975998 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.81s 2025-01-16 15:17:04.976007 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 2.85s 2025-01-16 15:17:04.976016 | orchestrator | service-ks-register : glance | Creating users --------------------------- 2.74s 2025-01-16 15:17:04.976028 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 2.51s 2025-01-16 15:17:04.976037 | orchestrator | 2025-01-16 15:17:04 | INFO  | Task 15084ea0-5098-485e-b9fb-e1d9b889fff9 is in state STARTED 2025-01-16 15:17:04.976050 | orchestrator | 2025-01-16 15:17:04 | INFO  | Task 08c0856c-5b4e-4c2a-8d07-4a5b89dc4289 is in state STARTED 2025-01-16 15:17:08.018108 | orchestrator | 2025-01-16 15:17:04 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:17:08.018239 | orchestrator | 2025-01-16 15:17:08 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:17:08.019257 | orchestrator | 2025-01-16 15:17:08 | INFO  | Task 81658657-b236-472c-8b36-9b836dbb2224 is in state STARTED 2025-01-16 15:17:08.019289 | orchestrator | 2025-01-16 15:17:08 | INFO  | Task 15084ea0-5098-485e-b9fb-e1d9b889fff9 is in state STARTED 2025-01-16 15:17:08.019685 | orchestrator | 2025-01-16 15:17:08 | INFO  | Task 08c0856c-5b4e-4c2a-8d07-4a5b89dc4289 is in state STARTED 2025-01-16 15:17:11.043187 | orchestrator | 2025-01-16 15:17:08 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:17:11.043339 | orchestrator | 2025-01-16 15:17:11 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:17:11.044392 | orchestrator | 2025-01-16 15:17:11 | INFO  | Task 81658657-b236-472c-8b36-9b836dbb2224 is in state STARTED 2025-01-16 15:17:11.044538 | orchestrator | 2025-01-16 15:17:11 | INFO  | Task 
15084ea0-5098-485e-b9fb-e1d9b889fff9 is in state STARTED 2025-01-16 15:17:11.044570 | orchestrator | 2025-01-16 15:17:11 | INFO  | Task 08c0856c-5b4e-4c2a-8d07-4a5b89dc4289 is in state STARTED 2025-01-16 15:17:14.071637 | orchestrator | 2025-01-16 15:17:11 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:17:14.071781 | orchestrator | 2025-01-16 15:17:14 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:17:17.096166 | orchestrator | 2025-01-16 15:17:14 | INFO  | Task 81658657-b236-472c-8b36-9b836dbb2224 is in state STARTED 2025-01-16 15:17:17.096314 | orchestrator | 2025-01-16 15:17:14 | INFO  | Task 15084ea0-5098-485e-b9fb-e1d9b889fff9 is in state STARTED 2025-01-16 15:17:17.096335 | orchestrator | 2025-01-16 15:17:14 | INFO  | Task 08c0856c-5b4e-4c2a-8d07-4a5b89dc4289 is in state STARTED 2025-01-16 15:17:17.096352 | orchestrator | 2025-01-16 15:17:14 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:17:17.096387 | orchestrator | 2025-01-16 15:17:17 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:17:20.117924 | orchestrator | 2025-01-16 15:17:17 | INFO  | Task 81658657-b236-472c-8b36-9b836dbb2224 is in state STARTED 2025-01-16 15:17:20.118135 | orchestrator | 2025-01-16 15:17:17 | INFO  | Task 15084ea0-5098-485e-b9fb-e1d9b889fff9 is in state STARTED 2025-01-16 15:17:20.118191 | orchestrator | 2025-01-16 15:17:17 | INFO  | Task 08c0856c-5b4e-4c2a-8d07-4a5b89dc4289 is in state STARTED 2025-01-16 15:17:20.118217 | orchestrator | 2025-01-16 15:17:17 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:17:20.118254 | orchestrator | 2025-01-16 15:17:20 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:17:23.138771 | orchestrator | 2025-01-16 15:17:20 | INFO  | Task 81658657-b236-472c-8b36-9b836dbb2224 is in state SUCCESS 2025-01-16 15:17:23.138859 | orchestrator | 2025-01-16 15:17:20 | INFO  | Task 15084ea0-5098-485e-b9fb-e1d9b889fff9 is in state STARTED 2025-01-16 15:17:23.138869 | orchestrator | 2025-01-16 15:17:20 | INFO  | Task 08c0856c-5b4e-4c2a-8d07-4a5b89dc4289 is in state STARTED 2025-01-16 15:17:23.138879 | orchestrator | 2025-01-16 15:17:20 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:17:23.138904 | orchestrator | 2025-01-16 15:17:23 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:17:26.167612 | orchestrator | 2025-01-16 15:17:23 | INFO  | Task 15084ea0-5098-485e-b9fb-e1d9b889fff9 is in state STARTED 2025-01-16 15:17:26.167706 | orchestrator | 2025-01-16 15:17:23 | INFO  | Task 08c0856c-5b4e-4c2a-8d07-4a5b89dc4289 is in state STARTED 2025-01-16 15:17:26.167730 | orchestrator | 2025-01-16 15:17:23 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:17:26.167752 | orchestrator | 2025-01-16 15:17:26 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:17:29.190392 | orchestrator | 2025-01-16 15:17:26 | INFO  | Task 15084ea0-5098-485e-b9fb-e1d9b889fff9 is in state STARTED 2025-01-16 15:17:29.190834 | orchestrator | 2025-01-16 15:17:26 | INFO  | Task 08c0856c-5b4e-4c2a-8d07-4a5b89dc4289 is in state STARTED 2025-01-16 15:17:29.190879 | orchestrator | 2025-01-16 15:17:26 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:17:29.190932 | orchestrator | 2025-01-16 15:17:29 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:17:32.214931 | orchestrator | 2025-01-16 15:17:29 | INFO  | Task 
15084ea0-5098-485e-b9fb-e1d9b889fff9 is in state STARTED 2025-01-16 15:17:32.215066 | orchestrator | 2025-01-16 15:17:29 | INFO  | Task 08c0856c-5b4e-4c2a-8d07-4a5b89dc4289 is in state STARTED 2025-01-16 15:17:32.215088 | orchestrator | 2025-01-16 15:17:29 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:17:32.215125 | orchestrator | 2025-01-16 15:17:32 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:17:35.247919 | orchestrator | 2025-01-16 15:17:32 | INFO  | Task 15084ea0-5098-485e-b9fb-e1d9b889fff9 is in state SUCCESS 2025-01-16 15:17:35.248137 | orchestrator | 2025-01-16 15:17:32 | INFO  | Task 08c0856c-5b4e-4c2a-8d07-4a5b89dc4289 is in state STARTED 2025-01-16 15:17:35.248184 | orchestrator | 2025-01-16 15:17:32 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:17:35.248208 | orchestrator | 2025-01-16 15:17:35 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:17:38.275165 | orchestrator | 2025-01-16 15:17:35 | INFO  | Task 08c0856c-5b4e-4c2a-8d07-4a5b89dc4289 is in state STARTED 2025-01-16 15:17:38.275293 | orchestrator | 2025-01-16 15:17:35 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:17:38.275329 | orchestrator | 2025-01-16 15:17:38 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:17:41.302930 | orchestrator | 2025-01-16 15:17:38 | INFO  | Task 08c0856c-5b4e-4c2a-8d07-4a5b89dc4289 is in state STARTED 2025-01-16 15:17:41.303120 | orchestrator | 2025-01-16 15:17:38 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:17:41.303153 | orchestrator | 2025-01-16 15:17:41 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:17:44.325942 | orchestrator | 2025-01-16 15:17:41 | INFO  | Task 08c0856c-5b4e-4c2a-8d07-4a5b89dc4289 is in state STARTED 2025-01-16 15:17:44.326109 | orchestrator | 2025-01-16 15:17:41 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:17:44.326145 | orchestrator | 2025-01-16 15:17:44 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:17:47.344818 | orchestrator | 2025-01-16 15:17:44 | INFO  | Task 08c0856c-5b4e-4c2a-8d07-4a5b89dc4289 is in state STARTED 2025-01-16 15:17:47.344952 | orchestrator | 2025-01-16 15:17:44 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:17:47.345027 | orchestrator | 2025-01-16 15:17:47 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:17:50.367442 | orchestrator | 2025-01-16 15:17:47 | INFO  | Task 08c0856c-5b4e-4c2a-8d07-4a5b89dc4289 is in state STARTED 2025-01-16 15:17:50.367587 | orchestrator | 2025-01-16 15:17:47 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:17:50.367614 | orchestrator | 2025-01-16 15:17:50 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:17:53.394310 | orchestrator | 2025-01-16 15:17:50 | INFO  | Task 08c0856c-5b4e-4c2a-8d07-4a5b89dc4289 is in state STARTED 2025-01-16 15:17:53.394409 | orchestrator | 2025-01-16 15:17:50 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:17:53.394434 | orchestrator | 2025-01-16 15:17:53 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:17:56.413671 | orchestrator | 2025-01-16 15:17:53 | INFO  | Task 08c0856c-5b4e-4c2a-8d07-4a5b89dc4289 is in state STARTED 2025-01-16 15:17:56.413804 | orchestrator | 2025-01-16 15:17:53 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:17:56.413848 | orchestrator 
| 2025-01-16 15:17:56 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:17:59.441661 | orchestrator | 2025-01-16 15:17:56 | INFO  | Task 08c0856c-5b4e-4c2a-8d07-4a5b89dc4289 is in state STARTED 2025-01-16 15:17:59.442426 | orchestrator | 2025-01-16 15:17:56 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:17:59.442493 | orchestrator | 2025-01-16 15:17:59 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:18:02.468540 | orchestrator | 2025-01-16 15:17:59 | INFO  | Task 08c0856c-5b4e-4c2a-8d07-4a5b89dc4289 is in state STARTED 2025-01-16 15:18:02.468696 | orchestrator | 2025-01-16 15:17:59 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:18:02.468755 | orchestrator | 2025-01-16 15:18:02 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:18:05.503808 | orchestrator | 2025-01-16 15:18:02 | INFO  | Task 08c0856c-5b4e-4c2a-8d07-4a5b89dc4289 is in state STARTED 2025-01-16 15:18:05.503948 | orchestrator | 2025-01-16 15:18:02 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:18:05.503988 | orchestrator | 2025-01-16 15:18:05 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:18:08.530787 | orchestrator | 2025-01-16 15:18:05 | INFO  | Task 08c0856c-5b4e-4c2a-8d07-4a5b89dc4289 is in state STARTED 2025-01-16 15:18:08.530930 | orchestrator | 2025-01-16 15:18:05 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:18:08.531106 | orchestrator | 2025-01-16 15:18:08 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:18:11.557069 | orchestrator | 2025-01-16 15:18:08 | INFO  | Task 08c0856c-5b4e-4c2a-8d07-4a5b89dc4289 is in state STARTED 2025-01-16 15:18:11.557180 | orchestrator | 2025-01-16 15:18:08 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:18:11.557212 | orchestrator | 2025-01-16 15:18:11 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:18:14.583993 | orchestrator | 2025-01-16 15:18:11 | INFO  | Task 08c0856c-5b4e-4c2a-8d07-4a5b89dc4289 is in state STARTED 2025-01-16 15:18:14.584125 | orchestrator | 2025-01-16 15:18:11 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:18:14.584164 | orchestrator | 2025-01-16 15:18:14 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:18:14.584670 | orchestrator | 2025-01-16 15:18:14 | INFO  | Task 08c0856c-5b4e-4c2a-8d07-4a5b89dc4289 is in state STARTED 2025-01-16 15:18:17.605280 | orchestrator | 2025-01-16 15:18:14 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:18:17.605411 | orchestrator | 2025-01-16 15:18:17 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:18:17.607505 | orchestrator | 2025-01-16 15:18:17 | INFO  | Task 08c0856c-5b4e-4c2a-8d07-4a5b89dc4289 is in state SUCCESS 2025-01-16 15:18:17.607551 | orchestrator | 2025-01-16 15:18:17 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:18:17.607573 | orchestrator | 2025-01-16 15:18:17.607583 | orchestrator | 2025-01-16 15:18:17.607593 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-01-16 15:18:17.607603 | orchestrator | 2025-01-16 15:18:17.607613 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-01-16 15:18:17.607622 | orchestrator | Thursday 16 January 2025 15:16:40 +0000 (0:00:00.205) 0:00:00.205 ****** 2025-01-16 
15:18:17.607632 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:18:17.607644 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:18:17.607654 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:18:17.607663 | orchestrator | 2025-01-16 15:18:17.607673 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-01-16 15:18:17.607682 | orchestrator | Thursday 16 January 2025 15:16:40 +0000 (0:00:00.281) 0:00:00.487 ****** 2025-01-16 15:18:17.607692 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-01-16 15:18:17.607702 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-01-16 15:18:17.607712 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-01-16 15:18:17.607721 | orchestrator | 2025-01-16 15:18:17.607731 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-01-16 15:18:17.607740 | orchestrator | 2025-01-16 15:18:17.607749 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-01-16 15:18:17.607759 | orchestrator | Thursday 16 January 2025 15:16:40 +0000 (0:00:00.242) 0:00:00.730 ****** 2025-01-16 15:18:17.607768 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:18:17.607801 | orchestrator | 2025-01-16 15:18:17.607811 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-01-16 15:18:17.607821 | orchestrator | Thursday 16 January 2025 15:16:41 +0000 (0:00:00.611) 0:00:01.342 ****** 2025-01-16 15:18:17.607831 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-01-16 15:18:17.607840 | orchestrator | 2025-01-16 15:18:17.607850 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-01-16 15:18:17.607859 | orchestrator | Thursday 16 January 2025 15:16:43 +0000 (0:00:02.492) 0:00:03.834 ****** 2025-01-16 15:18:17.607868 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-01-16 15:18:17.607878 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-01-16 15:18:17.607984 | orchestrator | 2025-01-16 15:18:17.607996 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-01-16 15:18:17.608006 | orchestrator | Thursday 16 January 2025 15:16:48 +0000 (0:00:04.495) 0:00:08.330 ****** 2025-01-16 15:18:17.608016 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-01-16 15:18:17.608026 | orchestrator | 2025-01-16 15:18:17.608035 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-01-16 15:18:17.608044 | orchestrator | Thursday 16 January 2025 15:16:50 +0000 (0:00:02.383) 0:00:10.713 ****** 2025-01-16 15:18:17.608054 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-01-16 15:18:17.608063 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-01-16 15:18:17.608073 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-01-16 15:18:17.608083 | orchestrator | 2025-01-16 15:18:17.608094 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-01-16 15:18:17.608104 | orchestrator | Thursday 16 January 2025 15:16:56 +0000 (0:00:05.594) 0:00:16.308 ****** 
2025-01-16 15:18:17.608114 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-01-16 15:18:17.608125 | orchestrator | 2025-01-16 15:18:17.608135 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-01-16 15:18:17.608145 | orchestrator | Thursday 16 January 2025 15:16:58 +0000 (0:00:02.196) 0:00:18.505 ****** 2025-01-16 15:18:17.608156 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-01-16 15:18:17.608166 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-01-16 15:18:17.608177 | orchestrator | 2025-01-16 15:18:17.608187 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-01-16 15:18:17.608198 | orchestrator | Thursday 16 January 2025 15:17:03 +0000 (0:00:05.048) 0:00:23.554 ****** 2025-01-16 15:18:17.608208 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-01-16 15:18:17.608218 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-01-16 15:18:17.608228 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-01-16 15:18:17.608238 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-01-16 15:18:17.608249 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-01-16 15:18:17.608259 | orchestrator | 2025-01-16 15:18:17.608269 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-01-16 15:18:17.608280 | orchestrator | Thursday 16 January 2025 15:17:14 +0000 (0:00:10.578) 0:00:34.133 ****** 2025-01-16 15:18:17.608290 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:18:17.608301 | orchestrator | 2025-01-16 15:18:17.608310 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-01-16 15:18:17.608319 | orchestrator | Thursday 16 January 2025 15:17:14 +0000 (0:00:00.598) 0:00:34.731 ****** 2025-01-16 15:18:17.608752 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"action": "os_nova_flavor", "changed": false, "extra_data": {"data": null, "details": "503 Service Unavailable: No server is available to handle this request.: ", "response": "
503 Service Unavailable
\nNo server is available to handle this request.\n\n"}, "msg": "HttpException: 503: Server Error for url: https://api-int.testbed.osism.xyz:8774/v2.1/flavors/amphora, 503 Service Unavailable: No server is available to handle this request.: "} 2025-01-16 15:18:17.608816 | orchestrator | 2025-01-16 15:18:17.608834 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 15:18:17.608850 | orchestrator | testbed-node-0 : ok=11  changed=5  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-01-16 15:18:17.608867 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 15:18:17.608882 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 15:18:17.608898 | orchestrator | 2025-01-16 15:18:17.608912 | orchestrator | 2025-01-16 15:18:17.608929 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 15:18:17.608943 | orchestrator | Thursday 16 January 2025 15:17:16 +0000 (0:00:02.354) 0:00:37.086 ****** 2025-01-16 15:18:17.608959 | orchestrator | =============================================================================== 2025-01-16 15:18:17.608976 | orchestrator | octavia : Adding octavia related roles --------------------------------- 10.58s 2025-01-16 15:18:17.609394 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 5.59s 2025-01-16 15:18:17.609430 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 5.05s 2025-01-16 15:18:17.609447 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 4.50s 2025-01-16 15:18:17.609499 | orchestrator | service-ks-register : octavia | Creating services ----------------------- 2.49s 2025-01-16 15:18:17.609517 | orchestrator | service-ks-register : octavia | Creating projects ----------------------- 2.38s 2025-01-16 15:18:17.609534 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 2.35s 2025-01-16 15:18:17.609550 | orchestrator | service-ks-register : octavia | Creating roles -------------------------- 2.20s 2025-01-16 15:18:17.609566 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.61s 2025-01-16 15:18:17.609582 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.60s 2025-01-16 15:18:17.609597 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.28s 2025-01-16 15:18:17.609613 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.24s 2025-01-16 15:18:17.609628 | orchestrator | 2025-01-16 15:18:17.609642 | orchestrator | 2025-01-16 15:18:17.609657 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-01-16 15:18:17.609672 | orchestrator | 2025-01-16 15:18:17.609688 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-01-16 15:18:17.609703 | orchestrator | Thursday 16 January 2025 15:16:37 +0000 (0:00:00.144) 0:00:00.144 ****** 2025-01-16 15:18:17.609718 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:18:17.609736 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:18:17.609751 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:18:17.609766 | orchestrator | 2025-01-16 15:18:17.609780 | orchestrator | TASK [Group hosts based 
on enabled services] *********************************** 2025-01-16 15:18:17.609795 | orchestrator | Thursday 16 January 2025 15:16:37 +0000 (0:00:00.182) 0:00:00.327 ****** 2025-01-16 15:18:17.609811 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-01-16 15:18:17.609827 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-01-16 15:18:17.609844 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-01-16 15:18:17.609860 | orchestrator | 2025-01-16 15:18:17.609877 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-01-16 15:18:17.609894 | orchestrator | 2025-01-16 15:18:17.609911 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-01-16 15:18:17.609943 | orchestrator | Thursday 16 January 2025 15:16:38 +0000 (0:00:00.499) 0:00:00.826 ****** 2025-01-16 15:18:17.609960 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:18:17.609978 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:18:17.609995 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:18:17.610071 | orchestrator | 2025-01-16 15:18:17.610094 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 15:18:17.610113 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 15:18:17.610133 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 15:18:17.610152 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 15:18:17.610171 | orchestrator | 2025-01-16 15:18:17.610189 | orchestrator | 2025-01-16 15:18:17.610206 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 15:18:17.610222 | orchestrator | Thursday 16 January 2025 15:17:29 +0000 (0:00:51.818) 0:00:52.645 ****** 2025-01-16 15:18:17.610239 | orchestrator | =============================================================================== 2025-01-16 15:18:17.610255 | orchestrator | Waiting for Nova public port to be UP ---------------------------------- 51.82s 2025-01-16 15:18:17.610271 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.50s 2025-01-16 15:18:17.610288 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.18s 2025-01-16 15:18:17.610304 | orchestrator | 2025-01-16 15:18:17.610321 | orchestrator | 2025-01-16 15:18:17.610338 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-01-16 15:18:17.610354 | orchestrator | 2025-01-16 15:18:17.610371 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-01-16 15:18:17.610436 | orchestrator | Thursday 16 January 2025 15:17:04 +0000 (0:00:00.217) 0:00:00.217 ****** 2025-01-16 15:18:17.610456 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:18:17.610552 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:18:17.610570 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:18:17.610587 | orchestrator | 2025-01-16 15:18:17.610604 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-01-16 15:18:17.610621 | orchestrator | Thursday 16 January 2025 15:17:05 +0000 (0:00:00.290) 0:00:00.508 ****** 2025-01-16 15:18:17.610637 | orchestrator | ok: [testbed-node-0] => 
(item=enable_grafana_True) 2025-01-16 15:18:17.610661 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-01-16 15:18:17.610674 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-01-16 15:18:17.610687 | orchestrator | 2025-01-16 15:18:17.610701 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-01-16 15:18:17.610713 | orchestrator | 2025-01-16 15:18:17.610726 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-01-16 15:18:17.610739 | orchestrator | Thursday 16 January 2025 15:17:05 +0000 (0:00:00.201) 0:00:00.710 ****** 2025-01-16 15:18:17.610753 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:18:17.610767 | orchestrator | 2025-01-16 15:18:17.610781 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-01-16 15:18:17.610793 | orchestrator | Thursday 16 January 2025 15:17:05 +0000 (0:00:00.471) 0:00:01.182 ****** 2025-01-16 15:18:17.610808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-01-16 15:18:17.610839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-01-16 15:18:17.610857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-01-16 15:18:17.610872 | orchestrator | 2025-01-16 15:18:17.610886 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-01-16 15:18:17.610901 | orchestrator | Thursday 16 January 2025 15:17:06 +0000 (0:00:00.540) 0:00:01.722 ****** 2025-01-16 15:18:17.610914 | 
orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-01-16 15:18:17.610936 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-01-16 15:18:17.610951 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-01-16 15:18:17.610965 | orchestrator | 2025-01-16 15:18:17.610980 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-01-16 15:18:17.610995 | orchestrator | Thursday 16 January 2025 15:17:06 +0000 (0:00:00.346) 0:00:02.069 ****** 2025-01-16 15:18:17.611009 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:18:17.611023 | orchestrator | 2025-01-16 15:18:17.611038 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-01-16 15:18:17.611052 | orchestrator | Thursday 16 January 2025 15:17:07 +0000 (0:00:00.418) 0:00:02.487 ****** 2025-01-16 15:18:17.611114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-01-16 15:18:17.611134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-01-16 15:18:17.611159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-01-16 15:18:17.611174 | orchestrator | 2025-01-16 15:18:17.611189 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-01-16 15:18:17.611202 | orchestrator | Thursday 16 January 2025 15:17:08 +0000 (0:00:00.870) 0:00:03.358 ****** 2025-01-16 15:18:17.611217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 
'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-01-16 15:18:17.611231 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:18:17.611246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-01-16 15:18:17.611260 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:18:17.611274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-01-16 15:18:17.611289 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:18:17.611303 | orchestrator | 2025-01-16 15:18:17.611349 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-01-16 15:18:17.611367 | orchestrator | Thursday 16 January 2025 15:17:08 +0000 (0:00:00.428) 0:00:03.787 ****** 2025-01-16 15:18:17.611382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-01-16 15:18:17.611406 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:18:17.611484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-01-16 15:18:17.611502 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:18:17.611521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-01-16 15:18:17.611536 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:18:17.611551 | orchestrator | 2025-01-16 15:18:17.611565 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-01-16 15:18:17.611580 | orchestrator | Thursday 16 January 2025 15:17:08 +0000 (0:00:00.443) 0:00:04.230 ****** 2025-01-16 15:18:17.611594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-01-16 15:18:17.611605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-01-16 15:18:17.611642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 
'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-01-16 15:18:17.611659 | orchestrator | 2025-01-16 15:18:17.611668 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-01-16 15:18:17.611677 | orchestrator | Thursday 16 January 2025 15:17:09 +0000 (0:00:00.874) 0:00:05.105 ****** 2025-01-16 15:18:17.611685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-01-16 15:18:17.611707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-01-16 15:18:17.611717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-01-16 15:18:17.611726 | orchestrator | 2025-01-16 15:18:17.611734 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-01-16 15:18:17.611743 | orchestrator | Thursday 16 January 2025 15:17:10 +0000 (0:00:01.166) 0:00:06.271 ****** 2025-01-16 15:18:17.611752 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:18:17.611765 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:18:17.611774 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:18:17.611782 | orchestrator | 2025-01-16 15:18:17.611791 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-01-16 15:18:17.611799 | orchestrator | Thursday 16 January 2025 15:17:11 +0000 (0:00:00.183) 0:00:06.455 ****** 2025-01-16 15:18:17.611808 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-01-16 15:18:17.611817 | 
orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-01-16 15:18:17.611826 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-01-16 15:18:17.611834 | orchestrator | 2025-01-16 15:18:17.611843 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-01-16 15:18:17.611852 | orchestrator | Thursday 16 January 2025 15:17:12 +0000 (0:00:00.955) 0:00:07.410 ****** 2025-01-16 15:18:17.611861 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-01-16 15:18:17.611870 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-01-16 15:18:17.611883 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-01-16 15:18:17.611892 | orchestrator | 2025-01-16 15:18:17.611901 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-01-16 15:18:17.611909 | orchestrator | Thursday 16 January 2025 15:17:13 +0000 (0:00:01.009) 0:00:08.419 ****** 2025-01-16 15:18:17.611918 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-01-16 15:18:17.611927 | orchestrator | 2025-01-16 15:18:17.611954 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-01-16 15:18:17.611963 | orchestrator | Thursday 16 January 2025 15:17:13 +0000 (0:00:00.315) 0:00:08.734 ****** 2025-01-16 15:18:17.611972 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-01-16 15:18:17.611990 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-01-16 15:18:17.611998 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:18:17.612007 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:18:17.612016 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:18:17.612025 | orchestrator | 2025-01-16 15:18:17.612033 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-01-16 15:18:17.612045 | orchestrator | Thursday 16 January 2025 15:17:14 +0000 (0:00:00.620) 0:00:09.355 ****** 2025-01-16 15:18:17.612054 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:18:17.612063 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:18:17.612072 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:18:17.612080 | orchestrator | 2025-01-16 15:18:17.612089 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-01-16 15:18:17.612097 | orchestrator | Thursday 16 January 2025 15:17:14 +0000 (0:00:00.383) 0:00:09.739 ****** 2025-01-16 15:18:17.612106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1088819, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1763124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-01-16 
15:18:17.612116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1088819, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1763124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-01-16 15:18:17.612125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1088819, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1763124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-01-16 15:18:17.612135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1088798, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1733122, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-01-16 15:18:17.612149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1088798, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1733122, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-01-16 15:18:17.612184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1088798, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1733122, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-01-16 15:18:17.612195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1088782, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1713123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
Between 2025-01-16 15:18:17.612204 and 15:18:17.613589 the orchestrator reported the remaining loop items of this task as changed across testbed-node-0, testbed-node-1 and testbed-node-2. Every item is a regular file under /operations/grafana/dashboards/ with mode 0644, owner root:root (uid 0, gid 0), dev 202, nlink 1 and atime/mtime 1736985777.0; per item only the key/path, size, inode and ctime (within 1737038164.166-1737038164.195) differ:
ceph/: osds-overview.json (38432 B, inode 1088782), rbd-details.json (12997 B, 1088811), host-details.json (44791 B, 1088766), pool-detail.json (19609 B, 1088789), radosgw-sync-overview.json (16156 B, 1088807), cephfs-overview.json (9025 B, 1088764), README.md (84 B, 1088736), hosts-overview.json (27218 B, 1088770), ceph-cluster.json (34113 B, 1088752), radosgw-overview.json (39556 B, 1088802), multi-cluster-overview.json (62676 B, 1088777), rbd-overview.json (25686 B, 1088815), ceph_pools.json (25279 B, 1088760), pool-overview.json (49139 B, 1088794), ceph-cluster-advanced.json (117836 B, 1088745), ceph_overview.json (80386 B, 1088757), osd-device-details.json (26655 B, 1088781)
infrastructure/: node_exporter_full.json (682774 B, inode 1088899), libvirt.json (29672 B, 1088874), alertmanager-overview.json (9645 B, 1088826), prometheus_alertmanager.json (115472 B, 1088953), blackbox.json (31128 B, 1088827), prometheus-remote-write.json (22317 B, 1088947), rabbitmq.json (222049 B, 1088961), node_exporter_side_by_side.json (70691 B, 1088929), opensearch.json (65458 B, 1088940), cadvisor.json (53882 B, 1088832), memcached.json (24243 B, 1088886), redfish.json (38087 B, 1088973), prometheus.json (21898 B, 1088950), elasticsearch.json (187864 B, 1088842), database.json (30898 B, 1088838), fluentd.json (82960 B, 1088854), haproxy.json (410814 B, 1088858), node-cluster-rsrc-use.json (16098 B, 1088890)
2025-01-16 15:18:17.613589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1088858, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1833124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp':
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-01-16 15:18:17.613598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1088937, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1913126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-01-16 15:18:17.613611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1088937, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1913126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-01-16 15:18:17.613625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1088890, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1853125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-01-16 15:18:17.613634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1088893, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1853125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-01-16 15:18:17.613644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1088893, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1853125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2025-01-16 15:18:17.613661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1088937, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1913126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-01-16 15:18:17.613670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1088979, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1953125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-01-16 15:18:17.613680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1088979, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1953125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-01-16 15:18:17.613698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1088893, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1853125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-01-16 15:18:17.613707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1088979, 'dev': 202, 'nlink': 1, 'atime': 1736985777.0, 'mtime': 1736985777.0, 'ctime': 1737038164.1953125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-01-16 15:18:17.613716 | orchestrator | 2025-01-16 15:18:17.613725 | orchestrator | TASK [grafana : Check grafana containers] 
************************************** 2025-01-16 15:18:17.613734 | orchestrator | Thursday 16 January 2025 15:17:38 +0000 (0:00:24.346) 0:00:34.085 ****** 2025-01-16 15:18:17.613743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-01-16 15:18:17.613759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-01-16 15:18:17.613769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-01-16 15:18:17.613778 | orchestrator | 2025-01-16 15:18:17.613791 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-01-16 15:18:17.613800 | orchestrator | Thursday 16 January 2025 15:17:39 +0000 (0:00:00.909) 0:00:34.994 ****** 2025-01-16 15:18:17.613809 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:18:17.613817 | orchestrator | 2025-01-16 15:18:17.613826 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-01-16 15:18:17.613835 | orchestrator | Thursday 16 January 2025 15:17:41 +0000 (0:00:01.808) 0:00:36.803 ****** 2025-01-16 15:18:17.613843 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:18:17.613852 | orchestrator | 2025-01-16 15:18:17.613861 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-01-16 15:18:17.613870 | orchestrator | Thursday 16 January 2025 15:17:43 +0000 (0:00:01.593) 0:00:38.396 ****** 2025-01-16 15:18:17.613878 | orchestrator | 2025-01-16 15:18:17.613887 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-01-16 15:18:17.613899 | orchestrator | Thursday 16 January 2025 15:17:43 +0000 (0:00:00.043) 
0:00:38.440 ****** 2025-01-16 15:18:17.613908 | orchestrator | 2025-01-16 15:18:17.613917 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-01-16 15:18:17.613926 | orchestrator | Thursday 16 January 2025 15:17:43 +0000 (0:00:00.042) 0:00:38.482 ****** 2025-01-16 15:18:17.613934 | orchestrator | 2025-01-16 15:18:17.613943 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-01-16 15:18:17.613951 | orchestrator | Thursday 16 January 2025 15:17:43 +0000 (0:00:00.113) 0:00:38.596 ****** 2025-01-16 15:18:17.613960 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:18:17.613969 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:18:17.613977 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:18:17.613986 | orchestrator | 2025-01-16 15:18:17.613994 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-01-16 15:18:17.614003 | orchestrator | Thursday 16 January 2025 15:17:49 +0000 (0:00:06.272) 0:00:44.868 ****** 2025-01-16 15:18:17.614011 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:18:17.614062 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:18:17.614071 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-01-16 15:18:17.614080 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:18:17.614089 | orchestrator | 2025-01-16 15:18:17.614097 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-01-16 15:18:17.614106 | orchestrator | Thursday 16 January 2025 15:18:02 +0000 (0:00:13.331) 0:00:58.199 ****** 2025-01-16 15:18:17.614114 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:18:17.614123 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:18:17.614131 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:18:17.614140 | orchestrator | 2025-01-16 15:18:17.614149 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-01-16 15:18:17.614157 | orchestrator | Thursday 16 January 2025 15:18:10 +0000 (0:00:07.359) 0:01:05.558 ****** 2025-01-16 15:18:17.614166 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:18:17.614174 | orchestrator | 2025-01-16 15:18:17.614183 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-01-16 15:18:17.614191 | orchestrator | Thursday 16 January 2025 15:18:11 +0000 (0:00:01.721) 0:01:07.280 ****** 2025-01-16 15:18:17.614200 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:18:17.614208 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:18:17.614217 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:18:17.614225 | orchestrator | 2025-01-16 15:18:17.614234 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-01-16 15:18:17.614243 | orchestrator | Thursday 16 January 2025 15:18:12 +0000 (0:00:00.249) 0:01:07.529 ****** 2025-01-16 15:18:17.614252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2025-01-16 15:18:17.614271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 
'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-01-16 15:18:17.614280 | orchestrator | 2025-01-16 15:18:17.614289 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-01-16 15:18:17.614298 | orchestrator | Thursday 16 January 2025 15:18:14 +0000 (0:00:01.865) 0:01:09.394 ****** 2025-01-16 15:18:17.614307 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:18:17.614315 | orchestrator | 2025-01-16 15:18:17.614324 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 15:18:17.614333 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-01-16 15:18:17.614343 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-01-16 15:18:17.614352 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-01-16 15:18:17.614361 | orchestrator | 2025-01-16 15:18:17.614369 | orchestrator | 2025-01-16 15:18:17.614378 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 15:18:17.614386 | orchestrator | Thursday 16 January 2025 15:18:14 +0000 (0:00:00.622) 0:01:10.017 ****** 2025-01-16 15:18:17.614399 | orchestrator | =============================================================================== 2025-01-16 15:18:17.614408 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 24.35s 2025-01-16 15:18:17.614417 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 13.33s 2025-01-16 15:18:17.614425 | orchestrator | grafana : Restart remaining grafana containers -------------------------- 7.36s 2025-01-16 15:18:17.614434 | orchestrator | grafana : Restart first grafana container ------------------------------- 6.27s 2025-01-16 15:18:17.614443 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 1.87s 2025-01-16 15:18:17.614451 | orchestrator | grafana : Creating grafana database ------------------------------------- 1.81s 2025-01-16 15:18:17.614510 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 1.72s 2025-01-16 15:18:17.614521 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 1.59s 2025-01-16 15:18:17.614535 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.17s 2025-01-16 15:18:20.634175 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.01s 2025-01-16 15:18:20.634301 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 0.96s 2025-01-16 15:18:20.634320 | orchestrator | grafana : Check grafana containers -------------------------------------- 0.91s 2025-01-16 15:18:20.634335 | orchestrator | grafana : Copying over config.json files -------------------------------- 0.87s 2025-01-16 15:18:20.634349 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 0.87s 2025-01-16 15:18:20.634363 | orchestrator | grafana : Disable Getting Started panel --------------------------------- 0.62s 2025-01-16 15:18:20.634378 | orchestrator | grafana : Find templated 
grafana dashboards ----------------------------- 0.62s 2025-01-16 15:18:20.634392 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.54s 2025-01-16 15:18:20.634406 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.47s 2025-01-16 15:18:20.634420 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.44s 2025-01-16 15:18:20.634434 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS certificate --- 0.43s 2025-01-16 15:18:20.634516 | orchestrator | 2025-01-16 15:18:20 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:18:23.653539 | orchestrator | 2025-01-16 15:18:20 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:18:23.653725 | orchestrator | 2025-01-16 15:18:23 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:18:26.675934 | orchestrator | 2025-01-16 15:18:23 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:18:26.676042 | orchestrator | 2025-01-16 15:18:26 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:18:29.696619 | orchestrator | 2025-01-16 15:18:26 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:18:29.696843 | orchestrator | 2025-01-16 15:18:29 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:18:32.715194 | orchestrator | 2025-01-16 15:18:29 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:18:32.715317 | orchestrator | 2025-01-16 15:18:32 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:18:35.738532 | orchestrator | 2025-01-16 15:18:32 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:18:35.738681 | orchestrator | 2025-01-16 15:18:35 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:18:38.759311 | orchestrator | 2025-01-16 15:18:35 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:18:38.759543 | orchestrator | 2025-01-16 15:18:38 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:18:41.775924 | orchestrator | 2025-01-16 15:18:38 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:18:41.776050 | orchestrator | 2025-01-16 15:18:41 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:18:44.805863 | orchestrator | 2025-01-16 15:18:41 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:18:44.805960 | orchestrator | 2025-01-16 15:18:44 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:18:47.823832 | orchestrator | 2025-01-16 15:18:44 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:18:47.824004 | orchestrator | 2025-01-16 15:18:47 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:18:50.853224 | orchestrator | 2025-01-16 15:18:47 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:18:50.853341 | orchestrator | 2025-01-16 15:18:50 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:18:53.874725 | orchestrator | 2025-01-16 15:18:50 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:18:53.874874 | orchestrator | 2025-01-16 15:18:53 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:18:56.898248 | orchestrator | 2025-01-16 15:18:53 | INFO  | Wait 1 second(s) until the next check 2025-01-16 
15:18:56.898392 | orchestrator | 2025-01-16 15:18:56 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:18:59.931710 | orchestrator | 2025-01-16 15:18:56 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:18:59.931826 | orchestrator | 2025-01-16 15:18:59 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:19:02.957045 | orchestrator | 2025-01-16 15:18:59 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:19:02.957167 | orchestrator | 2025-01-16 15:19:02 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:19:05.982779 | orchestrator | 2025-01-16 15:19:02 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:19:05.982991 | orchestrator | 2025-01-16 15:19:05 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:19:09.003016 | orchestrator | 2025-01-16 15:19:05 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:19:09.003200 | orchestrator | 2025-01-16 15:19:08 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:19:12.019848 | orchestrator | 2025-01-16 15:19:08 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:19:12.019969 | orchestrator | 2025-01-16 15:19:12 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:19:15.036996 | orchestrator | 2025-01-16 15:19:12 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:19:15.037136 | orchestrator | 2025-01-16 15:19:15 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:19:18.054872 | orchestrator | 2025-01-16 15:19:15 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:19:18.055008 | orchestrator | 2025-01-16 15:19:18 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:19:21.071823 | orchestrator | 2025-01-16 15:19:18 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:19:21.071922 | orchestrator | 2025-01-16 15:19:21 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:19:24.096420 | orchestrator | 2025-01-16 15:19:21 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:19:24.096672 | orchestrator | 2025-01-16 15:19:24 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:19:27.116103 | orchestrator | 2025-01-16 15:19:24 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:19:27.116208 | orchestrator | 2025-01-16 15:19:27 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:19:30.132956 | orchestrator | 2025-01-16 15:19:27 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:19:30.133092 | orchestrator | 2025-01-16 15:19:30 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:19:33.152157 | orchestrator | 2025-01-16 15:19:30 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:19:33.152298 | orchestrator | 2025-01-16 15:19:33 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:19:36.169996 | orchestrator | 2025-01-16 15:19:33 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:19:36.170180 | orchestrator | 2025-01-16 15:19:36 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:19:39.194130 | orchestrator | 2025-01-16 15:19:36 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:19:39.194271 | orchestrator | 2025-01-16 15:19:39 | INFO  | Task 
ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:19:42.214082 | orchestrator | 2025-01-16 15:19:39 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:19:42.214261 | orchestrator | 2025-01-16 15:19:42 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:19:45.232514 | orchestrator | 2025-01-16 15:19:42 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:19:45.232655 | orchestrator | 2025-01-16 15:19:45 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:19:48.265384 | orchestrator | 2025-01-16 15:19:45 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:19:48.265571 | orchestrator | 2025-01-16 15:19:48 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:19:51.290641 | orchestrator | 2025-01-16 15:19:48 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:19:51.290824 | orchestrator | 2025-01-16 15:19:51 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:19:54.312023 | orchestrator | 2025-01-16 15:19:51 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:19:54.312186 | orchestrator | 2025-01-16 15:19:54 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:19:57.330142 | orchestrator | 2025-01-16 15:19:54 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:19:57.330276 | orchestrator | 2025-01-16 15:19:57 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:20:00.355506 | orchestrator | 2025-01-16 15:19:57 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:20:00.355656 | orchestrator | 2025-01-16 15:20:00 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:20:03.374735 | orchestrator | 2025-01-16 15:20:00 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:20:03.374889 | orchestrator | 2025-01-16 15:20:03 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:20:06.393320 | orchestrator | 2025-01-16 15:20:03 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:20:06.393436 | orchestrator | 2025-01-16 15:20:06 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:20:09.418091 | orchestrator | 2025-01-16 15:20:06 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:20:09.418256 | orchestrator | 2025-01-16 15:20:09 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:20:12.434621 | orchestrator | 2025-01-16 15:20:09 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:20:12.434734 | orchestrator | 2025-01-16 15:20:12 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:20:15.454761 | orchestrator | 2025-01-16 15:20:12 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:20:15.454913 | orchestrator | 2025-01-16 15:20:15 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:20:18.478159 | orchestrator | 2025-01-16 15:20:15 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:20:18.478365 | orchestrator | 2025-01-16 15:20:18 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state STARTED 2025-01-16 15:20:21.497866 | orchestrator | 2025-01-16 15:20:18 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:20:21.498006 | orchestrator | 2025-01-16 15:20:21 | INFO  | Task ad9d8b5f-f6c1-4dd0-b3c9-95ab72554544 is in state SUCCESS 2025-01-16 
15:20:21.499087 | orchestrator | 2025-01-16 15:20:21.499125 | orchestrator | 2025-01-16 15:20:21.499141 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-01-16 15:20:21.499156 | orchestrator | 2025-01-16 15:20:21.499171 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-01-16 15:20:21.499185 | orchestrator | Thursday 16 January 2025 15:14:47 +0000 (0:00:00.375) 0:00:00.375 ****** 2025-01-16 15:20:21.500281 | orchestrator | changed: [testbed-manager] 2025-01-16 15:20:21.500300 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:20:21.500315 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:20:21.500329 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:20:21.500343 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:20:21.500357 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:20:21.500372 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:20:21.500386 | orchestrator | 2025-01-16 15:20:21.500401 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-01-16 15:20:21.500415 | orchestrator | Thursday 16 January 2025 15:14:48 +0000 (0:00:01.005) 0:00:01.381 ****** 2025-01-16 15:20:21.500430 | orchestrator | changed: [testbed-manager] 2025-01-16 15:20:21.500533 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:20:21.500551 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:20:21.500565 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:20:21.500579 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:20:21.500593 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:20:21.500608 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:20:21.500621 | orchestrator | 2025-01-16 15:20:21.500636 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-01-16 15:20:21.500650 | orchestrator | Thursday 16 January 2025 15:14:50 +0000 (0:00:01.967) 0:00:03.348 ****** 2025-01-16 15:20:21.501543 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-01-16 15:20:21.501564 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-01-16 15:20:21.501579 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-01-16 15:20:21.501595 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-01-16 15:20:21.501928 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-01-16 15:20:21.501954 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-01-16 15:20:21.501969 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-01-16 15:20:21.501983 | orchestrator | 2025-01-16 15:20:21.501997 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-01-16 15:20:21.502012 | orchestrator | 2025-01-16 15:20:21.502064 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-01-16 15:20:21.502078 | orchestrator | Thursday 16 January 2025 15:14:52 +0000 (0:00:01.948) 0:00:05.297 ****** 2025-01-16 15:20:21.502092 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:20:21.502106 | orchestrator | 2025-01-16 15:20:21.502121 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-01-16 15:20:21.502136 | orchestrator | Thursday 16 January 2025 15:14:53 +0000 (0:00:01.005) 
0:00:06.303 ****** 2025-01-16 15:20:21.502151 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-01-16 15:20:21.502165 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-01-16 15:20:21.502179 | orchestrator | 2025-01-16 15:20:21.502193 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-01-16 15:20:21.502207 | orchestrator | Thursday 16 January 2025 15:14:56 +0000 (0:00:03.096) 0:00:09.400 ****** 2025-01-16 15:20:21.502221 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-01-16 15:20:21.502236 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-01-16 15:20:21.502250 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:20:21.502264 | orchestrator | 2025-01-16 15:20:21.502279 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-01-16 15:20:21.502884 | orchestrator | Thursday 16 January 2025 15:14:59 +0000 (0:00:03.423) 0:00:12.823 ****** 2025-01-16 15:20:21.502913 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:20:21.502929 | orchestrator | 2025-01-16 15:20:21.502962 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-01-16 15:20:21.502978 | orchestrator | Thursday 16 January 2025 15:15:00 +0000 (0:00:00.464) 0:00:13.287 ****** 2025-01-16 15:20:21.502993 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:20:21.503009 | orchestrator | 2025-01-16 15:20:21.503024 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-01-16 15:20:21.503040 | orchestrator | Thursday 16 January 2025 15:15:01 +0000 (0:00:01.120) 0:00:14.408 ****** 2025-01-16 15:20:21.503054 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:20:21.503067 | orchestrator | 2025-01-16 15:20:21.503081 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-01-16 15:20:21.503095 | orchestrator | Thursday 16 January 2025 15:15:04 +0000 (0:00:02.894) 0:00:17.303 ****** 2025-01-16 15:20:21.503108 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:20:21.503122 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:20:21.503136 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:20:21.503165 | orchestrator | 2025-01-16 15:20:21.503179 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-01-16 15:20:21.503193 | orchestrator | Thursday 16 January 2025 15:15:04 +0000 (0:00:00.566) 0:00:17.869 ****** 2025-01-16 15:20:21.503207 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:20:21.503221 | orchestrator | 2025-01-16 15:20:21.503234 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-01-16 15:20:21.503248 | orchestrator | Thursday 16 January 2025 15:15:23 +0000 (0:00:18.506) 0:00:36.375 ****** 2025-01-16 15:20:21.503262 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:20:21.503275 | orchestrator | 2025-01-16 15:20:21.503289 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-01-16 15:20:21.503302 | orchestrator | Thursday 16 January 2025 15:15:33 +0000 (0:00:10.261) 0:00:46.636 ****** 2025-01-16 15:20:21.503316 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:20:21.503330 | orchestrator | 2025-01-16 15:20:21.503344 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-01-16 15:20:21.503357 | 
orchestrator | Thursday 16 January 2025 15:15:41 +0000 (0:00:07.783) 0:00:54.420 ****** 2025-01-16 15:20:21.503480 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:20:21.503502 | orchestrator | 2025-01-16 15:20:21.503516 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-01-16 15:20:21.503529 | orchestrator | Thursday 16 January 2025 15:15:41 +0000 (0:00:00.597) 0:00:55.017 ****** 2025-01-16 15:20:21.503543 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:20:21.503557 | orchestrator | 2025-01-16 15:20:21.503571 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-01-16 15:20:21.503585 | orchestrator | Thursday 16 January 2025 15:15:42 +0000 (0:00:00.329) 0:00:55.347 ****** 2025-01-16 15:20:21.503600 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:20:21.503614 | orchestrator | 2025-01-16 15:20:21.503627 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-01-16 15:20:21.503641 | orchestrator | Thursday 16 January 2025 15:15:42 +0000 (0:00:00.716) 0:00:56.063 ****** 2025-01-16 15:20:21.503655 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:20:21.503680 | orchestrator | 2025-01-16 15:20:21.503694 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-01-16 15:20:21.503708 | orchestrator | Thursday 16 January 2025 15:15:52 +0000 (0:00:09.612) 0:01:05.675 ****** 2025-01-16 15:20:21.503722 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:20:21.503737 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:20:21.503751 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:20:21.503764 | orchestrator | 2025-01-16 15:20:21.503778 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-01-16 15:20:21.503792 | orchestrator | 2025-01-16 15:20:21.503806 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-01-16 15:20:21.503820 | orchestrator | Thursday 16 January 2025 15:15:52 +0000 (0:00:00.528) 0:01:06.203 ****** 2025-01-16 15:20:21.503833 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:20:21.503847 | orchestrator | 2025-01-16 15:20:21.503864 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-01-16 15:20:21.503878 | orchestrator | Thursday 16 January 2025 15:15:55 +0000 (0:00:02.292) 0:01:08.496 ****** 2025-01-16 15:20:21.503891 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:20:21.503905 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:20:21.503918 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:20:21.503932 | orchestrator | 2025-01-16 15:20:21.503945 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-01-16 15:20:21.503958 | orchestrator | Thursday 16 January 2025 15:15:57 +0000 (0:00:02.280) 0:01:10.776 ****** 2025-01-16 15:20:21.503972 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:20:21.503985 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:20:21.503998 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:20:21.504020 | orchestrator | 2025-01-16 15:20:21.504033 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-01-16 
15:20:21.504046 | orchestrator | Thursday 16 January 2025 15:15:59 +0000 (0:00:01.743) 0:01:12.519 ****** 2025-01-16 15:20:21.504060 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:20:21.504073 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:20:21.504088 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:20:21.504102 | orchestrator | 2025-01-16 15:20:21.504117 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-01-16 15:20:21.504131 | orchestrator | Thursday 16 January 2025 15:15:59 +0000 (0:00:00.514) 0:01:13.034 ****** 2025-01-16 15:20:21.504146 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-01-16 15:20:21.504160 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:20:21.504175 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-01-16 15:20:21.504190 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:20:21.504204 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-01-16 15:20:21.504219 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-01-16 15:20:21.504234 | orchestrator | 2025-01-16 15:20:21.504249 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-01-16 15:20:21.504263 | orchestrator | Thursday 16 January 2025 15:16:05 +0000 (0:00:05.254) 0:01:18.289 ****** 2025-01-16 15:20:21.504278 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:20:21.504293 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:20:21.504307 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:20:21.504321 | orchestrator | 2025-01-16 15:20:21.504336 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-01-16 15:20:21.504351 | orchestrator | Thursday 16 January 2025 15:16:05 +0000 (0:00:00.334) 0:01:18.623 ****** 2025-01-16 15:20:21.504366 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-01-16 15:20:21.504381 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:20:21.504395 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-01-16 15:20:21.504409 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:20:21.504423 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-01-16 15:20:21.504437 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:20:21.504469 | orchestrator | 2025-01-16 15:20:21.504484 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-01-16 15:20:21.504497 | orchestrator | Thursday 16 January 2025 15:16:06 +0000 (0:00:00.849) 0:01:19.472 ****** 2025-01-16 15:20:21.504511 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:20:21.504524 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:20:21.504538 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:20:21.504551 | orchestrator | 2025-01-16 15:20:21.504565 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-01-16 15:20:21.504578 | orchestrator | Thursday 16 January 2025 15:16:06 +0000 (0:00:00.493) 0:01:19.965 ****** 2025-01-16 15:20:21.504592 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:20:21.504605 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:20:21.504618 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:20:21.504632 | orchestrator | 2025-01-16 15:20:21.504646 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-01-16 
15:20:21.504659 | orchestrator | Thursday 16 January 2025 15:16:07 +0000 (0:00:00.795) 0:01:20.761 ****** 2025-01-16 15:20:21.504673 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:20:21.504686 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:20:21.504779 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:20:21.504799 | orchestrator | 2025-01-16 15:20:21.504813 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-01-16 15:20:21.504827 | orchestrator | Thursday 16 January 2025 15:16:10 +0000 (0:00:02.551) 0:01:23.313 ****** 2025-01-16 15:20:21.504840 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:20:21.504853 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:20:21.504874 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:20:21.504888 | orchestrator | 2025-01-16 15:20:21.504902 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-01-16 15:20:21.504915 | orchestrator | Thursday 16 January 2025 15:16:24 +0000 (0:00:14.745) 0:01:38.058 ****** 2025-01-16 15:20:21.504929 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:20:21.504942 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:20:21.504956 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:20:21.504969 | orchestrator | 2025-01-16 15:20:21.504982 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-01-16 15:20:21.504996 | orchestrator | Thursday 16 January 2025 15:16:32 +0000 (0:00:07.368) 0:01:45.426 ****** 2025-01-16 15:20:21.505009 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:20:21.505023 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:20:21.505036 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:20:21.505049 | orchestrator | 2025-01-16 15:20:21.505063 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-01-16 15:20:21.505076 | orchestrator | Thursday 16 January 2025 15:16:33 +0000 (0:00:00.987) 0:01:46.414 ****** 2025-01-16 15:20:21.505090 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:20:21.505103 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:20:21.505116 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:20:21.505130 | orchestrator | 2025-01-16 15:20:21.505143 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-01-16 15:20:21.505156 | orchestrator | Thursday 16 January 2025 15:16:41 +0000 (0:00:07.880) 0:01:54.294 ****** 2025-01-16 15:20:21.505169 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:20:21.505183 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:20:21.505196 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:20:21.505209 | orchestrator | 2025-01-16 15:20:21.505222 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-01-16 15:20:21.505236 | orchestrator | Thursday 16 January 2025 15:16:42 +0000 (0:00:01.335) 0:01:55.630 ****** 2025-01-16 15:20:21.505269 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:20:21.505283 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:20:21.505297 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:20:21.505310 | orchestrator | 2025-01-16 15:20:21.505323 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-01-16 15:20:21.505337 | orchestrator | 2025-01-16 15:20:21.505357 | orchestrator | TASK 
[nova : include_tasks] **************************************************** 2025-01-16 15:20:21.505376 | orchestrator | Thursday 16 January 2025 15:16:42 +0000 (0:00:00.302) 0:01:55.932 ****** 2025-01-16 15:20:21.505390 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:20:21.505406 | orchestrator | 2025-01-16 15:20:21.505420 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-01-16 15:20:21.505435 | orchestrator | Thursday 16 January 2025 15:16:43 +0000 (0:00:00.417) 0:01:56.350 ****** 2025-01-16 15:20:21.505500 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-01-16 15:20:21.505516 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-01-16 15:20:21.505529 | orchestrator | 2025-01-16 15:20:21.505543 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-01-16 15:20:21.505556 | orchestrator | Thursday 16 January 2025 15:16:45 +0000 (0:00:02.342) 0:01:58.692 ****** 2025-01-16 15:20:21.505571 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-01-16 15:20:21.505586 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-01-16 15:20:21.505600 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-01-16 15:20:21.505614 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-01-16 15:20:21.505634 | orchestrator | 2025-01-16 15:20:21.505647 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-01-16 15:20:21.505660 | orchestrator | Thursday 16 January 2025 15:16:49 +0000 (0:00:04.426) 0:02:03.119 ****** 2025-01-16 15:20:21.505673 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-01-16 15:20:21.505685 | orchestrator | 2025-01-16 15:20:21.505698 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-01-16 15:20:21.505710 | orchestrator | Thursday 16 January 2025 15:16:52 +0000 (0:00:02.210) 0:02:05.330 ****** 2025-01-16 15:20:21.505723 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-01-16 15:20:21.505735 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-01-16 15:20:21.505748 | orchestrator | 2025-01-16 15:20:21.505760 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-01-16 15:20:21.505772 | orchestrator | Thursday 16 January 2025 15:16:54 +0000 (0:00:02.633) 0:02:07.963 ****** 2025-01-16 15:20:21.505783 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-01-16 15:20:21.505793 | orchestrator | 2025-01-16 15:20:21.505803 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-01-16 15:20:21.505813 | orchestrator | Thursday 16 January 2025 15:16:57 +0000 (0:00:02.333) 0:02:10.296 ****** 2025-01-16 15:20:21.505824 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-01-16 15:20:21.505834 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-01-16 15:20:21.505844 | orchestrator | 2025-01-16 15:20:21.505855 | orchestrator | TASK [nova : Ensuring config 
directories exist] ******************************** 2025-01-16 15:20:21.505931 | orchestrator | Thursday 16 January 2025 15:17:02 +0000 (0:00:05.208) 0:02:15.504 ****** 2025-01-16 15:20:21.505949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-01-16 15:20:21.505964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-01-16 15:20:21.506057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-01-16 15:20:21.506135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.506152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.506165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.506177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.506189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}}) 2025-01-16 15:20:21.506220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.506232 | orchestrator | 2025-01-16 15:20:21.506243 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-01-16 15:20:21.506255 | orchestrator | Thursday 16 January 2025 15:17:03 +0000 (0:00:01.027) 0:02:16.531 ****** 2025-01-16 15:20:21.506266 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:20:21.506278 | orchestrator | 2025-01-16 15:20:21.506289 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-01-16 15:20:21.506300 | orchestrator | Thursday 16 January 2025 15:17:03 +0000 (0:00:00.157) 0:02:16.689 ****** 2025-01-16 15:20:21.506311 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:20:21.506326 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:20:21.506338 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:20:21.506349 | orchestrator | 2025-01-16 15:20:21.506360 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-01-16 15:20:21.506371 | orchestrator | Thursday 16 January 2025 15:17:03 +0000 (0:00:00.179) 0:02:16.868 ****** 2025-01-16 15:20:21.506382 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-01-16 15:20:21.506393 | orchestrator | 2025-01-16 15:20:21.506421 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-01-16 15:20:21.506510 | orchestrator | Thursday 16 January 2025 15:17:04 +0000 (0:00:00.361) 0:02:17.230 ****** 2025-01-16 15:20:21.506527 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:20:21.506539 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:20:21.506550 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:20:21.506562 | orchestrator | 2025-01-16 15:20:21.506573 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-01-16 15:20:21.506584 | orchestrator | Thursday 16 January 2025 15:17:04 +0000 (0:00:00.197) 0:02:17.427 ****** 2025-01-16 15:20:21.506596 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:20:21.506607 | orchestrator | 2025-01-16 15:20:21.506618 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-01-16 15:20:21.506629 | orchestrator | Thursday 16 January 2025 15:17:04 +0000 (0:00:00.494) 0:02:17.922 ****** 2025-01-16 15:20:21.506641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
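Each item printed in the loops above is one entry from the role's services dictionary; the same structure (image, volumes, healthcheck test, haproxy listeners) is reused by the later config, container-check and handler tasks, and entries whose enabled flag is 'no' (nova-super-conductor here) are skipped everywhere. A minimal sketch of the directory-creation pattern, assuming a nova_services dict shaped like the items shown:

    - name: Ensuring config directories exist (sketch)
      ansible.builtin.file:
        path: "/etc/kolla/{{ item.key }}"
        state: directory
        mode: "0770"          # assumption; the real role derives owner and mode from kolla variables
      become: true
      when: item.value.enabled | bool
      with_dict: "{{ nova_services }}"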
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-01-16 15:20:21.506661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-01-16 15:20:21.506740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-01-16 15:20:21.506757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 
'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.506769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.506788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.506800 | orchestrator | 2025-01-16 15:20:21.506811 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-01-16 15:20:21.506823 | orchestrator | Thursday 16 January 2025 15:17:06 +0000 (0:00:01.683) 0:02:19.606 ****** 2025-01-16 15:20:21.506846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-01-16 15:20:21.506859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.506871 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:20:21.506937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-01-16 15:20:21.506960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.506971 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:20:21.506982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-01-16 15:20:21.507018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.507032 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:20:21.507043 | orchestrator | 2025-01-16 15:20:21.507054 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-01-16 15:20:21.507065 | orchestrator | Thursday 16 January 2025 15:17:06 +0000 (0:00:00.391) 0:02:19.998 ****** 2025-01-16 15:20:21.507135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-01-16 15:20:21.507158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.507170 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:20:21.507182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-01-16 15:20:21.507205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.507217 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:20:21.507283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-01-16 15:20:21.507305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.507318 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:20:21.507340 | orchestrator | 2025-01-16 15:20:21.507350 | orchestrator | TASK [nova : Copying over config.json files for 
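The certificate tasks above copy the extra CA bundle into every enabled service's config directory, while the backend internal TLS certificate and key tasks are skipped because every listener in the service dict carries tls_backend: 'no'. A sketch of the copy pattern, assuming the CA material lives under a certificates/ca directory in the operator's configuration (the source path is an assumption):

    - name: Copying over extra CA certificates (sketch)
      ansible.builtin.copy:
        src: "{{ node_config }}/certificates/ca/"     # assumed location of the CA bundle
        dest: "/etc/kolla/{{ item.key }}/ca-certificates/"
        mode: "0644"
      become: true
      when:
        - kolla_copy_ca_into_containers | bool        # the switch that enables this task
        - item.value.enabled | bool
      with_dict: "{{ nova_services }}"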
services] ********************** 2025-01-16 15:20:21.507401 | orchestrator | Thursday 16 January 2025 15:17:07 +0000 (0:00:00.741) 0:02:20.739 ****** 2025-01-16 15:20:21.507413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-01-16 15:20:21.507440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-01-16 15:20:21.507549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-01-16 15:20:21.507587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.507606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.507622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.507639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.507676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': 
'30'}}}) 2025-01-16 15:20:21.507760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.507805 | orchestrator | 2025-01-16 15:20:21.507816 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-01-16 15:20:21.507827 | orchestrator | Thursday 16 January 2025 15:17:09 +0000 (0:00:01.606) 0:02:22.345 ****** 2025-01-16 15:20:21.507838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-01-16 15:20:21.507849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-01-16 15:20:21.507930 | orchestrator | changed: [testbed-node-2] 
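The config.json files copied a few tasks back tell kolla's container entrypoint which files to move into place and which permissions to set before the service starts. Roughly what a rendered /etc/kolla/nova-api/config.json contains, written here in YAML form; the command line and file list are assumptions for illustration, the real content comes from the role's template:

    command: nova-api                                  # assumption; the actual command comes from the template
    config_files:
      - source: /var/lib/kolla/config_files/nova.conf
        dest: /etc/nova/nova.conf
        owner: nova
        perm: "0600"
      - source: /var/lib/kolla/config_files/nova-api-wsgi.conf
        dest: /etc/nova/nova-api-wsgi.conf
        owner: nova
        perm: "0600"
    permissions:
      - path: /var/log/kolla/nova
        owner: nova:nova
        recurse: true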
=> (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-01-16 15:20:21.507954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.507967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.507979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.508009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.508028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.508047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.508065 | orchestrator | 2025-01-16 15:20:21.508090 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-01-16 15:20:21.508108 | orchestrator | Thursday 16 January 2025 15:17:13 +0000 (0:00:03.976) 0:02:26.321 ****** 2025-01-16 15:20:21.508215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-01-16 15:20:21.508249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
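The nova.conf step above is the slowest of the config tasks (about 4 seconds) because it renders the configuration per service and per host. kolla-ansible does this with its merge_configs action plugin, which layers the role template with any operator overrides; a sketch under the assumption that overrides live under node_custom_config:

    - name: Copying over nova.conf (sketch)
      merge_configs:                                   # kolla-ansible action plugin
        sources:
          - "{{ role_path }}/templates/nova.conf.j2"
          - "{{ node_custom_config }}/nova.conf"       # global override, if present
          - "{{ node_custom_config }}/nova/{{ item.key }}.conf"
        dest: "/etc/kolla/{{ item.key }}/nova.conf"
        mode: "0660"
      become: true
      when: item.value.enabled | bool
      with_dict: "{{ nova_services }}"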
'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.508262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.508273 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:20:21.508285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-01-16 15:20:21.508351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.508380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.508392 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:20:21.508404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 
'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-01-16 15:20:21.508427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.508439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.508510 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:20:21.508524 | orchestrator | 2025-01-16 15:20:21.508535 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-01-16 15:20:21.508546 | orchestrator | Thursday 16 January 2025 15:17:13 +0000 (0:00:00.538) 0:02:26.860 ****** 2025-01-16 15:20:21.508567 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:20:21.508577 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:20:21.508588 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:20:21.508615 | orchestrator | 2025-01-16 15:20:21.508626 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-01-16 15:20:21.508636 | orchestrator | Thursday 16 January 2025 15:17:14 +0000 (0:00:01.141) 0:02:28.001 ****** 2025-01-16 15:20:21.508647 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:20:21.508657 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:20:21.508668 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:20:21.508678 | orchestrator | 2025-01-16 15:20:21.508688 | orchestrator | TASK [nova : Check nova 
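The policy-file tasks above are skipped on all three controllers, which indicates no custom nova policy file was found in this environment, while nova-api-wsgi.conf is rendered on every API host and triggers the nova-api restart handler. A minimal sketch of the wsgi-config step, assuming a template named nova-api-wsgi.conf.j2 and the usual handler wiring:

    - name: Copying over nova-api-wsgi.conf (sketch)
      ansible.builtin.template:
        src: nova-api-wsgi.conf.j2                     # assumed template name
        dest: /etc/kolla/nova-api/nova-api-wsgi.conf
        mode: "0660"
      become: true
      notify:
        - Restart nova-api container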
containers] ******************************************** 2025-01-16 15:20:21.508698 | orchestrator | Thursday 16 January 2025 15:17:14 +0000 (0:00:00.201) 0:02:28.203 ****** 2025-01-16 15:20:21.508776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-01-16 15:20:21.508811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-01-16 15:20:21.508828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-01-16 15:20:21.508920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.508944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.508959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.508974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.508995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}}) 2025-01-16 15:20:21.509005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.509025 | orchestrator | 2025-01-16 15:20:21.509034 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-01-16 15:20:21.509043 | orchestrator | Thursday 16 January 2025 15:17:16 +0000 (0:00:01.433) 0:02:29.636 ****** 2025-01-16 15:20:21.509052 | orchestrator | 2025-01-16 15:20:21.509061 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-01-16 15:20:21.509070 | orchestrator | Thursday 16 January 2025 15:17:16 +0000 (0:00:00.153) 0:02:29.790 ****** 2025-01-16 15:20:21.509078 | orchestrator | 2025-01-16 15:20:21.509087 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-01-16 15:20:21.509095 | orchestrator | Thursday 16 January 2025 15:17:16 +0000 (0:00:00.076) 0:02:29.866 ****** 2025-01-16 15:20:21.509104 | orchestrator | 2025-01-16 15:20:21.509112 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-01-16 15:20:21.509121 | orchestrator | Thursday 16 January 2025 15:17:16 +0000 (0:00:00.150) 0:02:30.017 ****** 2025-01-16 15:20:21.509129 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:20:21.509138 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:20:21.509147 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:20:21.509171 | orchestrator | 2025-01-16 15:20:21.509180 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-01-16 15:20:21.509189 | orchestrator | Thursday 16 January 2025 15:17:24 +0000 (0:00:07.362) 0:02:37.379 ****** 2025-01-16 15:20:21.509198 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:20:21.509207 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:20:21.509215 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:20:21.509224 | orchestrator | 2025-01-16 15:20:21.509291 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-01-16 15:20:21.509304 | orchestrator | 2025-01-16 15:20:21.509313 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-01-16 15:20:21.509322 | orchestrator | Thursday 16 January 2025 15:17:28 +0000 (0:00:04.038) 0:02:41.417 ****** 2025-01-16 15:20:21.509331 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:20:21.509341 | orchestrator | 2025-01-16 15:20:21.509349 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-01-16 15:20:21.509358 | orchestrator | Thursday 16 January 2025 15:17:29 +0000 (0:00:00.990) 0:02:42.408 ****** 2025-01-16 15:20:21.509367 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:20:21.509375 | 
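The two handlers above ("Restart nova-scheduler container" and "Restart nova-api container") only run once the "Flush handlers" tasks fire, which is why all configuration is written first and the containers are (re)created in one pass at the end; the scheduler restart accounts for roughly 7.4 s and the API restart for roughly 4 s of this play. A rough sketch of such a handler using kolla-ansible's container module; the exact module name and the variable names differ between releases and are assumptions here:

    - name: Restart nova-api container
      become: true
      kolla_docker:                                    # called kolla_container on newer branches
        action: recreate_or_restart_container
        common_options: "{{ docker_common_options }}"
        name: nova_api
        image: "{{ service.image }}"
        volumes: "{{ service.volumes }}"
        dimensions: "{{ service.dimensions }}"
        healthcheck: "{{ service.healthcheck | default(omit) }}"
      vars:
        service: "{{ nova_services['nova-api'] }}"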
orchestrator | skipping: [testbed-node-4] 2025-01-16 15:20:21.509384 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:20:21.509393 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:20:21.509402 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:20:21.509411 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:20:21.509419 | orchestrator | 2025-01-16 15:20:21.509428 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-01-16 15:20:21.509437 | orchestrator | Thursday 16 January 2025 15:17:29 +0000 (0:00:00.450) 0:02:42.859 ****** 2025-01-16 15:20:21.509446 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:20:21.509479 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:20:21.509488 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:20:21.509497 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 15:20:21.509507 | orchestrator | 2025-01-16 15:20:21.509516 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-01-16 15:20:21.509524 | orchestrator | Thursday 16 January 2025 15:17:30 +0000 (0:00:00.783) 0:02:43.642 ****** 2025-01-16 15:20:21.509534 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-01-16 15:20:21.509549 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-01-16 15:20:21.509558 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-01-16 15:20:21.509567 | orchestrator | 2025-01-16 15:20:21.509576 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-01-16 15:20:21.509589 | orchestrator | Thursday 16 January 2025 15:17:31 +0000 (0:00:00.576) 0:02:44.219 ****** 2025-01-16 15:20:21.509598 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-01-16 15:20:21.509607 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-01-16 15:20:21.509616 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-01-16 15:20:21.509625 | orchestrator | 2025-01-16 15:20:21.509633 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-01-16 15:20:21.509642 | orchestrator | Thursday 16 January 2025 15:17:31 +0000 (0:00:00.779) 0:02:44.998 ****** 2025-01-16 15:20:21.509651 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-01-16 15:20:21.509660 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:20:21.509668 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-01-16 15:20:21.509677 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:20:21.509685 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-01-16 15:20:21.509694 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:20:21.509703 | orchestrator | 2025-01-16 15:20:21.509712 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-01-16 15:20:21.509720 | orchestrator | Thursday 16 January 2025 15:17:32 +0000 (0:00:00.430) 0:02:45.428 ****** 2025-01-16 15:20:21.509729 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-01-16 15:20:21.509738 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-01-16 15:20:21.509746 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:20:21.509755 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  
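
The module-load and bridge-nf-call tasks above prepare the compute nodes (testbed-node-3/4/5) for bridged instance networking. As a rough orientation, the following is a minimal Ansible sketch of what these steps typically look like; it assumes the community.general and ansible.posix collections are available, and the task names, file path, and values only mirror the log output here — it is not the kolla-ansible/OSISM source.

# Load br_netfilter now and persist it across reboots (illustrative sketch)
- name: Load modules
  community.general.modprobe:
    name: br_netfilter
    state: present

- name: Persist modules via modules-load.d
  ansible.builtin.copy:
    dest: /etc/modules-load.d/br_netfilter.conf   # hypothetical file name
    content: "br_netfilter\n"
    mode: "0644"

# Enable the bridge-nf-call sysctls so bridged traffic is passed through
# iptables/ip6tables, matching the items applied to the compute nodes above
- name: Enable bridge-nf-call sysctl variables
  ansible.posix.sysctl:
    name: "{{ item }}"
    value: "1"
    state: present
    sysctl_set: true
  loop:
    - net.bridge.bridge-nf-call-iptables
    - net.bridge.bridge-nf-call-ip6tables

The controller nodes (testbed-node-0/1/2) skip these items in the log because they run no nova-compute/libvirt services in this cell layout; only the hosts in the compute group need the bridge netfilter hooks.
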
2025-01-16 15:20:21.509764 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-01-16 15:20:21.509772 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-01-16 15:20:21.509781 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-01-16 15:20:21.509790 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:20:21.509798 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-01-16 15:20:21.509807 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-01-16 15:20:21.509815 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:20:21.509824 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-01-16 15:20:21.509833 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-01-16 15:20:21.509841 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-01-16 15:20:21.509850 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-01-16 15:20:21.509858 | orchestrator | 2025-01-16 15:20:21.509867 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-01-16 15:20:21.509876 | orchestrator | Thursday 16 January 2025 15:17:33 +0000 (0:00:00.876) 0:02:46.305 ****** 2025-01-16 15:20:21.509884 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:20:21.509893 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:20:21.509902 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:20:21.509910 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:20:21.509919 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:20:21.509929 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:20:21.509938 | orchestrator | 2025-01-16 15:20:21.509948 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-01-16 15:20:21.509957 | orchestrator | Thursday 16 January 2025 15:17:33 +0000 (0:00:00.723) 0:02:47.029 ****** 2025-01-16 15:20:21.510058 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:20:21.510073 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:20:21.510084 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:20:21.510099 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:20:21.510108 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:20:21.510117 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:20:21.510126 | orchestrator | 2025-01-16 15:20:21.510135 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-01-16 15:20:21.510143 | orchestrator | Thursday 16 January 2025 15:17:35 +0000 (0:00:01.321) 0:02:48.351 ****** 2025-01-16 15:20:21.510153 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-01-16 15:20:21.510164 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-01-16 15:20:21.510180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-01-16 15:20:21.510195 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.510296 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-01-16 15:20:21.510328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-01-16 15:20:21.510339 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.510353 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:20:21.510364 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-01-16 15:20:21.510373 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.510487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-01-16 15:20:21.510510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-01-16 15:20:21.510520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-01-16 15:20:21.510529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-01-16 15:20:21.510538 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-01-16 15:20:21.510547 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.510556 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.510621 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:20:21.510644 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.510654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-01-16 15:20:21.510664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.510673 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': 
{'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.510683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:20:21.510698 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.510768 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-01-16 15:20:21.510791 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.510801 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.510810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-01-16 15:20:21.510819 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:20:21.510828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.510842 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.510896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:20:21.510909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-01-16 15:20:21.510918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.510927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:20:21.510948 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.510962 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.510972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.511039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': 
{'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.511061 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.511071 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.511080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.511095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.511104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.511166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.511180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.511189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.511198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.511213 | orchestrator | 2025-01-16 
15:20:21.511222 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-01-16 15:20:21.511231 | orchestrator | Thursday 16 January 2025 15:17:37 +0000 (0:00:02.062) 0:02:50.414 ****** 2025-01-16 15:20:21.511240 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:20:21.511249 | orchestrator | 2025-01-16 15:20:21.511258 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-01-16 15:20:21.511266 | orchestrator | Thursday 16 January 2025 15:17:38 +0000 (0:00:00.915) 0:02:51.329 ****** 2025-01-16 15:20:21.511286 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-01-16 15:20:21.511343 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-01-16 15:20:21.511366 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-01-16 15:20:21.511376 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-01-16 15:20:21.511390 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-01-16 15:20:21.511400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-01-16 15:20:21.511409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-01-16 15:20:21.511500 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-01-16 15:20:21.511529 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-01-16 15:20:21.511539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.511548 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.511575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.511585 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.511655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.511669 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.511679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.511693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.511702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.511711 | orchestrator | 2025-01-16 15:20:21.511719 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-01-16 15:20:21.511727 | orchestrator | Thursday 16 January 2025 15:17:40 +0000 (0:00:02.859) 0:02:54.189 ****** 2025-01-16 15:20:21.511736 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-01-16 15:20:21.511786 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-01-16 15:20:21.511808 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.511817 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:20:21.511826 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-01-16 15:20:21.511840 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 
8022'], 'timeout': '30'}}})  2025-01-16 15:20:21.511848 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.511857 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:20:21.511933 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-01-16 15:20:21.511947 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-01-16 15:20:21.511956 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.511969 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:20:21.511978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.511986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.511995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.512003 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:20:21.512054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.512075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.512085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.512098 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:20:21.512107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.512115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.512124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.512143 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:20:21.512152 | orchestrator | 2025-01-16 15:20:21.512161 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-01-16 15:20:21.512170 | orchestrator | Thursday 16 January 2025 15:17:42 +0000 (0:00:01.157) 0:02:55.347 ****** 2025-01-16 15:20:21.512235 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-01-16 15:20:21.512249 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 
'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-01-16 15:20:21.512264 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.512273 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-01-16 15:20:21.512282 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-01-16 15:20:21.512297 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.512307 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:20:21.512316 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:20:21.512345 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-01-16 15:20:21.512361 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-01-16 15:20:21.512370 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.512379 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:20:21.512388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.512397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 
'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.512414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.512423 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:20:21.512484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.512501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.512510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.512518 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:20:21.512527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.512535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.512544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.512552 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:20:21.512560 | orchestrator | 2025-01-16 15:20:21.512569 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-01-16 15:20:21.512577 | orchestrator | Thursday 16 January 2025 15:17:43 +0000 (0:00:01.597) 0:02:56.945 ****** 2025-01-16 15:20:21.512585 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:20:21.512593 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:20:21.512601 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:20:21.512610 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-01-16 15:20:21.512622 | orchestrator | 2025-01-16 15:20:21.512648 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-01-16 15:20:21.512658 | orchestrator | Thursday 16 January 2025 15:17:44 +0000 (0:00:00.699) 0:02:57.644 ****** 2025-01-16 15:20:21.512666 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-01-16 15:20:21.512674 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-01-16 15:20:21.512682 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-01-16 15:20:21.512690 | orchestrator | 2025-01-16 15:20:21.512698 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-01-16 15:20:21.512706 | orchestrator | Thursday 16 January 2025 15:17:44 +0000 (0:00:00.490) 0:02:58.134 ****** 2025-01-16 15:20:21.512714 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-01-16 15:20:21.512722 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-01-16 15:20:21.512730 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-01-16 15:20:21.512738 | orchestrator | 2025-01-16 15:20:21.512746 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-01-16 15:20:21.512754 | orchestrator | Thursday 16 January 2025 15:17:45 +0000 (0:00:00.481) 0:02:58.615 ****** 2025-01-16 15:20:21.512762 | orchestrator | ok: 
[testbed-node-3] 2025-01-16 15:20:21.512770 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:20:21.512778 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:20:21.512786 | orchestrator | 2025-01-16 15:20:21.512794 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-01-16 15:20:21.512802 | orchestrator | Thursday 16 January 2025 15:17:45 +0000 (0:00:00.449) 0:02:59.065 ****** 2025-01-16 15:20:21.512810 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:20:21.512818 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:20:21.512826 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:20:21.512834 | orchestrator | 2025-01-16 15:20:21.512842 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-01-16 15:20:21.512849 | orchestrator | Thursday 16 January 2025 15:17:46 +0000 (0:00:00.285) 0:02:59.350 ****** 2025-01-16 15:20:21.512858 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-01-16 15:20:21.512866 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-01-16 15:20:21.512876 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-01-16 15:20:21.512885 | orchestrator | 2025-01-16 15:20:21.512894 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-01-16 15:20:21.512902 | orchestrator | Thursday 16 January 2025 15:17:46 +0000 (0:00:00.805) 0:03:00.155 ****** 2025-01-16 15:20:21.512912 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-01-16 15:20:21.512921 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-01-16 15:20:21.512930 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-01-16 15:20:21.512938 | orchestrator | 2025-01-16 15:20:21.512947 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-01-16 15:20:21.512956 | orchestrator | Thursday 16 January 2025 15:17:47 +0000 (0:00:00.823) 0:03:00.979 ****** 2025-01-16 15:20:21.512966 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-01-16 15:20:21.512975 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-01-16 15:20:21.512983 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-01-16 15:20:21.512992 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-01-16 15:20:21.513001 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-01-16 15:20:21.513009 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-01-16 15:20:21.513018 | orchestrator | 2025-01-16 15:20:21.513027 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-01-16 15:20:21.513036 | orchestrator | Thursday 16 January 2025 15:17:51 +0000 (0:00:03.388) 0:03:04.368 ****** 2025-01-16 15:20:21.513044 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:20:21.513053 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:20:21.513062 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:20:21.513071 | orchestrator | 2025-01-16 15:20:21.513096 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-01-16 15:20:21.513105 | orchestrator | Thursday 16 January 2025 15:17:51 +0000 (0:00:00.281) 0:03:04.650 ****** 2025-01-16 15:20:21.513114 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:20:21.513122 | orchestrator | skipping: [testbed-node-4] 2025-01-16 
15:20:21.513131 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:20:21.513140 | orchestrator | 2025-01-16 15:20:21.513149 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-01-16 15:20:21.513157 | orchestrator | Thursday 16 January 2025 15:17:51 +0000 (0:00:00.253) 0:03:04.903 ****** 2025-01-16 15:20:21.513165 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:20:21.513173 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:20:21.513181 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:20:21.513189 | orchestrator | 2025-01-16 15:20:21.513197 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-01-16 15:20:21.513205 | orchestrator | Thursday 16 January 2025 15:17:52 +0000 (0:00:00.890) 0:03:05.794 ****** 2025-01-16 15:20:21.513213 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-01-16 15:20:21.513226 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-01-16 15:20:21.513234 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-01-16 15:20:21.513242 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-01-16 15:20:21.513251 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-01-16 15:20:21.513277 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-01-16 15:20:21.513287 | orchestrator | 2025-01-16 15:20:21.513295 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-01-16 15:20:21.513303 | orchestrator | Thursday 16 January 2025 15:17:54 +0000 (0:00:02.176) 0:03:07.971 ****** 2025-01-16 15:20:21.513311 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-01-16 15:20:21.513319 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-01-16 15:20:21.513328 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-01-16 15:20:21.513336 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-01-16 15:20:21.513344 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:20:21.513352 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-01-16 15:20:21.513360 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:20:21.513368 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-01-16 15:20:21.513376 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:20:21.513384 | orchestrator | 2025-01-16 15:20:21.513392 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-01-16 15:20:21.513401 | orchestrator | Thursday 16 January 2025 15:17:56 +0000 (0:00:02.096) 0:03:10.067 ****** 2025-01-16 15:20:21.513409 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:20:21.513417 | orchestrator | 2025-01-16 15:20:21.513425 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-01-16 15:20:21.513433 | orchestrator | Thursday 16 January 2025 15:17:56 +0000 (0:00:00.067) 0:03:10.135 
****** 2025-01-16 15:20:21.513440 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:20:21.513448 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:20:21.513474 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:20:21.513483 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:20:21.513491 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:20:21.513499 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:20:21.513507 | orchestrator | 2025-01-16 15:20:21.513520 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-01-16 15:20:21.513528 | orchestrator | Thursday 16 January 2025 15:17:57 +0000 (0:00:00.554) 0:03:10.690 ****** 2025-01-16 15:20:21.513536 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-01-16 15:20:21.513544 | orchestrator | 2025-01-16 15:20:21.513558 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-01-16 15:20:21.513566 | orchestrator | Thursday 16 January 2025 15:17:57 +0000 (0:00:00.230) 0:03:10.920 ****** 2025-01-16 15:20:21.513574 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:20:21.513586 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:20:21.513594 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:20:21.513602 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:20:21.513610 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:20:21.513618 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:20:21.513626 | orchestrator | 2025-01-16 15:20:21.513634 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-01-16 15:20:21.513642 | orchestrator | Thursday 16 January 2025 15:17:58 +0000 (0:00:00.446) 0:03:11.367 ****** 2025-01-16 15:20:21.513659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-01-16 15:20:21.513669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-01-16 15:20:21.513700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-01-16 15:20:21.513709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-01-16 15:20:21.513730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-01-16 15:20:21.513739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-01-16 15:20:21.513748 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-01-16 15:20:21.513757 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-01-16 15:20:21.513784 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-01-16 15:20:21.513798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-01-16 15:20:21.513806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.513815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 
'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:20:21.513832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-01-16 15:20:21.513841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.513867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:20:21.513877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-01-16 15:20:21.513889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.513901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': 
{'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:20:21.513910 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-01-16 15:20:21.513918 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.513934 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.513943 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:20:21.513969 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.513983 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-01-16 15:20:21.513992 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.514000 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.514008 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:20:21.514041 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.514059 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-01-16 15:20:21.514093 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.514108 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.514118 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:20:21.514132 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.514146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.514169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.514184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.514237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.514253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.514267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.514279 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.514287 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.514328 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.514344 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.514353 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.514361 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.514377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.514386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.514419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.514428 | orchestrator | 2025-01-16 15:20:21.514437 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-01-16 15:20:21.514445 | orchestrator | Thursday 16 January 2025 15:18:00 +0000 (0:00:02.771) 0:03:14.139 ****** 2025-01-16 15:20:21.514472 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-01-16 15:20:21.514487 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-01-16 15:20:21.514500 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.514513 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.514525 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:20:21.514557 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.514602 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-01-16 15:20:21.514618 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-01-16 15:20:21.514630 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.514639 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.514648 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:20:21.514656 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  
2025-01-16 15:20:21.514702 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-01-16 15:20:21.514712 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-01-16 15:20:21.514720 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.514729 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.514737 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:20:21.514746 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.514784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-01-16 15:20:21.514794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-01-16 15:20:21.514803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-01-16 15:20:21.514811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-01-16 15:20:21.514820 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-01-16 15:20:21.514839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-01-16 15:20:21.514867 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.514876 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.514885 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.514893 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.514911 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.514924 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.514951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-01-16 15:20:21.514961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.514970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-01-16 15:20:21.514979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.514994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:20:21.515026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:20:21.515039 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-01-16 15:20:21.515095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.515110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:20:21.515119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.515127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.515136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.515150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.515167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 
'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.515196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.515206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.515214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.515222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.515235 | orchestrator | 2025-01-16 15:20:21.515244 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-01-16 
15:20:21.515252 | orchestrator | Thursday 16 January 2025 15:18:07 +0000 (0:00:06.324) 0:03:20.464 ******
2025-01-16 15:20:21.515260 | orchestrator | skipping: [testbed-node-4]
2025-01-16 15:20:21.515268 | orchestrator | skipping: [testbed-node-3]
2025-01-16 15:20:21.515276 | orchestrator | skipping: [testbed-node-5]
2025-01-16 15:20:21.515284 | orchestrator | skipping: [testbed-node-1]
2025-01-16 15:20:21.515293 | orchestrator | skipping: [testbed-node-2]
2025-01-16 15:20:21.515301 | orchestrator | skipping: [testbed-node-0]
2025-01-16 15:20:21.515309 | orchestrator |
2025-01-16 15:20:21.515317 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2025-01-16 15:20:21.515325 | orchestrator | Thursday 16 January 2025 15:18:08 +0000 (0:00:01.109) 0:03:21.574 ******
2025-01-16 15:20:21.515333 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-01-16 15:20:21.515342 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-01-16 15:20:21.515349 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-01-16 15:20:21.515357 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-01-16 15:20:21.515366 | orchestrator | skipping: [testbed-node-1]
2025-01-16 15:20:21.515374 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-01-16 15:20:21.515382 | orchestrator | skipping: [testbed-node-0]
2025-01-16 15:20:21.515390 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-01-16 15:20:21.515398 | orchestrator | skipping: [testbed-node-2]
2025-01-16 15:20:21.515406 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-01-16 15:20:21.515414 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-01-16 15:20:21.515422 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-01-16 15:20:21.515430 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-01-16 15:20:21.515507 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-01-16 15:20:21.515521 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-01-16 15:20:21.515529 | orchestrator |
2025-01-16 15:20:21.515538 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2025-01-16 15:20:21.515547 | orchestrator | Thursday 16 January 2025 15:18:11 +0000 (0:00:03.480) 0:03:25.054 ******
2025-01-16 15:20:21.515555 | orchestrator | skipping: [testbed-node-3]
2025-01-16 15:20:21.515563 | orchestrator | skipping: [testbed-node-4]
2025-01-16 15:20:21.515572 | orchestrator | skipping: [testbed-node-5]
2025-01-16 15:20:21.515580 | orchestrator | skipping: [testbed-node-0]
2025-01-16 15:20:21.515589 | orchestrator | skipping: [testbed-node-1]
2025-01-16 15:20:21.515597 | orchestrator | skipping: [testbed-node-2]
2025-01-16 15:20:21.515605 | orchestrator |
2025-01-16 15:20:21.515614 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2025-01-16 15:20:21.515622 | orchestrator | Thursday 16 January 2025 15:18:12 +0000 (0:00:00.529) 0:03:25.584 ******
2025-01-16 15:20:21.515631 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-01-16 15:20:21.515645 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-01-16 15:20:21.515653 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-01-16 15:20:21.515662 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-01-16 15:20:21.515670 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-01-16 15:20:21.515679 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-01-16 15:20:21.515687 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-01-16 15:20:21.515694 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-01-16 15:20:21.515702 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-01-16 15:20:21.515709 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-01-16 15:20:21.515716 | orchestrator | skipping: [testbed-node-0]
2025-01-16 15:20:21.515723 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-01-16 15:20:21.515731 | orchestrator | skipping: [testbed-node-2]
2025-01-16 15:20:21.515738 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-01-16 15:20:21.515745 | orchestrator | skipping: [testbed-node-1]
2025-01-16 15:20:21.515753 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-01-16 15:20:21.515760 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-01-16 15:20:21.515767 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-01-16 15:20:21.515774 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-01-16 15:20:21.515782 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-01-16 15:20:21.515789 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
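Editor's note: the "Copying over libvirt SASL configuration" results above follow the same (src, dest, service) loop pattern as the other config-file tasks in this role: the control nodes (testbed-node-0/1/2) skip the libvirt items, while the compute nodes (testbed-node-3/4/5) render them. The snippet below is only a rough Python re-creation of that skip/render split for illustration; it is not the kolla-ansible task itself, and render_template(), node_groups, and the /tmp staging directory are invented for the sketch (the real files land under /etc/kolla/<service>/, as the volume mounts in the output suggest).

# Minimal sketch, assuming made-up helpers; mirrors the skipping/changed split above.
from pathlib import Path

node_groups = {                      # assumed from the results printed above
    "testbed-node-0": {"control"},   # control nodes skip the libvirt items
    "testbed-node-3": {"compute"},   # compute nodes render them
}

sasl_items = [
    {"src": "auth.conf.j2", "dest": "auth.conf", "service": "nova-compute"},
    {"src": "auth.conf.j2", "dest": "auth.conf", "service": "nova-libvirt"},
    {"src": "sasl.conf.j2", "dest": "sasl.conf", "service": "nova-libvirt"},
]

base_dir = Path("/tmp/kolla-demo")   # stand-in for /etc/kolla on the target host

def render_template(src: str, dest: Path) -> None:
    """Hypothetical stand-in for ansible.builtin.template."""
    dest.parent.mkdir(parents=True, exist_ok=True)
    dest.write_text(f"# rendered from {src}\n")

for host, groups in node_groups.items():
    for item in sasl_items:
        if "compute" not in groups:
            print(f"skipping: [{host}] => {item}")   # control nodes, as above
            continue
        render_template(item["src"], base_dir / item["service"] / item["dest"])
        print(f"changed: [{host}] => {item}")        # compute nodes, as above

Running the sketch prints skipping/changed lines in the same shape as the task output, which is all it is meant to show.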
2025-01-16 15:20:21.515796 | orchestrator |
2025-01-16 15:20:21.515803 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2025-01-16 15:20:21.515811 | orchestrator | Thursday 16 January 2025 15:18:17 +0000 (0:00:04.779) 0:03:30.364 ******
2025-01-16 15:20:21.515818 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-01-16 15:20:21.515825 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-01-16 15:20:21.515833 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-01-16 15:20:21.515840 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-01-16 15:20:21.515847 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-01-16 15:20:21.515854 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-01-16 15:20:21.515862 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-01-16 15:20:21.515869 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-01-16 15:20:21.515876 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-01-16 15:20:21.515904 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-01-16 15:20:21.515913 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-01-16 15:20:21.515921 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-01-16 15:20:21.515928 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-01-16 15:20:21.515936 | orchestrator | skipping: [testbed-node-1]
2025-01-16 15:20:21.515944 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-01-16 15:20:21.515951 | orchestrator | skipping: [testbed-node-0]
2025-01-16 15:20:21.515959 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-01-16 15:20:21.515967 | orchestrator | skipping: [testbed-node-2]
2025-01-16 15:20:21.515974 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-01-16 15:20:21.515982 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-01-16 15:20:21.515990 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-01-16 15:20:21.515997 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-01-16 15:20:21.516008 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-01-16 15:20:21.516016 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-01-16 15:20:21.516023 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-01-16 15:20:21.516031 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-01-16 15:20:21.516039 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-01-16 15:20:21.516046 | orchestrator |
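Editor's note: the per-item dicts printed throughout this task output (nova_conductor, nova_ssh, nova_libvirt, ...) are kolla-ansible service definitions: a container name, an image from the local Nexus registry, bind mounts, and a healthcheck block. As a hedged illustration only, the sketch below flattens one such definition (the nova-conductor entry copied verbatim from the output above) into an equivalent `docker run` command line. This is not what kolla_docker actually executes; the assumption that the bare healthcheck numbers are seconds is mine, and to_docker_run() is invented for the sketch.

# Minimal sketch, assuming seconds for the healthcheck numbers; not kolla's code.
import shlex

nova_conductor = {
    "container_name": "nova_conductor",
    "image": "nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1",
    "volumes": [
        "/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro",
        "/etc/localtime:/etc/localtime:ro",
        "/etc/timezone:/etc/timezone:ro",
        "kolla_logs:/var/log/kolla/",
        "",  # empty entries in the output come from disabled optional mounts
    ],
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_port nova-conductor 5672"],
        "timeout": "30",
    },
}

def to_docker_run(svc: dict) -> str:
    """Build a docker run command from a kolla-style service dict (sketch only)."""
    parts = ["docker", "run", "-d", "--name", svc["container_name"]]
    for vol in svc["volumes"]:
        if vol:                                  # skip the placeholder empty strings
            parts += ["-v", vol]
    hc = svc["healthcheck"]
    parts += [
        "--health-cmd", hc["test"][1],
        "--health-interval", f"{hc['interval']}s",
        "--health-retries", hc["retries"],
        "--health-start-period", f"{hc['start_period']}s",
        "--health-timeout", f"{hc['timeout']}s",
    ]
    parts.append(svc["image"])
    return " ".join(shlex.quote(p) for p in parts)

print(to_docker_run(nova_conductor))

The point of the sketch is simply to make the repeated dicts easier to read back: each 'key'/'value' item above is one container with exactly this kind of image/volume/healthcheck payload.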
2025-01-16 15:20:21.516054 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2025-01-16 15:20:21.516061 | orchestrator | Thursday 16 January 2025 15:18:23 +0000 (0:00:06.412) 0:03:36.777 ******
2025-01-16 15:20:21.516069 | orchestrator | skipping: [testbed-node-3]
2025-01-16 15:20:21.516077 | orchestrator | skipping: [testbed-node-4]
2025-01-16 15:20:21.516084 | orchestrator | skipping: [testbed-node-5]
2025-01-16 15:20:21.516092 | orchestrator | skipping: [testbed-node-0]
2025-01-16 15:20:21.516099 | orchestrator | skipping: [testbed-node-1]
2025-01-16 15:20:21.516107 | orchestrator | skipping: [testbed-node-2]
2025-01-16 15:20:21.516115 | orchestrator |
2025-01-16 15:20:21.516123 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2025-01-16 15:20:21.516130 | orchestrator | Thursday 16 January 2025 15:18:24 +0000 (0:00:00.458) 0:03:37.235 ******
2025-01-16 15:20:21.516138 | orchestrator | skipping: [testbed-node-3]
2025-01-16 15:20:21.516145 | orchestrator | skipping: [testbed-node-4]
2025-01-16 15:20:21.516153 | orchestrator | skipping: [testbed-node-5]
2025-01-16 15:20:21.516161 | orchestrator | skipping: [testbed-node-0]
2025-01-16 15:20:21.516168 | orchestrator | skipping: [testbed-node-1]
2025-01-16 15:20:21.516176 | orchestrator | skipping: [testbed-node-2]
2025-01-16 15:20:21.516183 | orchestrator |
2025-01-16 15:20:21.516191 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2025-01-16 15:20:21.516199 | orchestrator | Thursday 16 January 2025 15:18:24 +0000 (0:00:00.562) 0:03:37.798 ******
2025-01-16 15:20:21.516206 | orchestrator | skipping: [testbed-node-0]
2025-01-16 15:20:21.516214 | orchestrator | skipping: [testbed-node-1]
2025-01-16 15:20:21.516221 | orchestrator | skipping: [testbed-node-2]
2025-01-16 15:20:21.516229 | orchestrator | changed: [testbed-node-5]
2025-01-16 15:20:21.516236 | orchestrator | changed: [testbed-node-3]
2025-01-16 15:20:21.516244 | orchestrator | changed: [testbed-node-4]
2025-01-16 15:20:21.516251 | orchestrator |
2025-01-16 15:20:21.516259 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2025-01-16 15:20:21.516270 | orchestrator | Thursday 16 January 2025 15:18:26 +0000 (0:00:01.912) 0:03:39.710 ******
2025-01-16 15:20:21.516279 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-01-16 15:20:21.516312 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30',
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-01-16 15:20:21.516323 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-01-16 15:20:21.516331 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.516339 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-01-16 15:20:21.516347 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.516359 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.516375 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': 
False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:20:21.516400 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.516409 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.516417 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.516425 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:20:21.516437 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.516445 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:20:21.516468 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.516504 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.516513 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.516520 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:20:21.516527 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-01-16 15:20:21.516534 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-01-16 15:20:21.516548 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.516556 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.516563 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:20:21.516580 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.516588 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.516595 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 
'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.516607 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:20:21.516614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-01-16 15:20:21.516627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-01-16 15:20:21.516635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.516648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.516656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 
'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:20:21.516663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.516671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.516682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.516689 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:20:21.516702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-01-16 15:20:21.516714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 
'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-01-16 15:20:21.516722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.516729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.516740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:20:21.516747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.516755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.516765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.516790 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:20:21.516803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-01-16 15:20:21.516815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-01-16 15:20:21.516832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.516845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.516857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:20:21.516870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.516896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.516905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.516916 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:20:21.516923 | orchestrator | 2025-01-16 15:20:21.516931 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-01-16 15:20:21.516938 | orchestrator | Thursday 16 January 2025 15:18:27 +0000 (0:00:01.236) 0:03:40.947 ****** 2025-01-16 15:20:21.516945 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-01-16 15:20:21.516952 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-01-16 15:20:21.516959 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:20:21.516967 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-01-16 15:20:21.516974 | orchestrator | skipping: [testbed-node-4] => 
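The healthcheck tests in these definitions call helper scripts shipped in the kolla images: healthcheck_curl probes an HTTP endpoint (the noVNC and SPICE proxies above), while healthcheck_port and healthcheck_listen verify, roughly, that the named process has a connection to, or is listening on, the given TCP port (nova-compute against 5672, sshd on 8022). A minimal stand-alone analogue of such a TCP probe, not the scripts' actual implementation:

    # Rough analogue of the TCP checks named above, e.g. "healthcheck_listen sshd 8022"
    # for nova_ssh: exit 0 if something accepts connections on the port, 1 otherwise.
    import socket
    import sys

    def tcp_check(host: str, port: int, timeout: float = 3.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        host, port = sys.argv[1], int(sys.argv[2])  # e.g. 127.0.0.1 8022
        sys.exit(0 if tcp_check(host, port) else 1)
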
(item=nova-compute-ironic)  2025-01-16 15:20:21.516981 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:20:21.516993 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-01-16 15:20:21.517001 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-01-16 15:20:21.517008 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:20:21.517016 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-01-16 15:20:21.517023 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-01-16 15:20:21.517030 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:20:21.517038 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-01-16 15:20:21.517045 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-01-16 15:20:21.517052 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:20:21.517059 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-01-16 15:20:21.517066 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-01-16 15:20:21.517073 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:20:21.517080 | orchestrator | 2025-01-16 15:20:21.517087 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-01-16 15:20:21.517094 | orchestrator | Thursday 16 January 2025 15:18:28 +0000 (0:00:00.641) 0:03:41.588 ****** 2025-01-16 15:20:21.517101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-01-16 15:20:21.517109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-01-16 15:20:21.517127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-01-16 15:20:21.517140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-01-16 15:20:21.517147 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-01-16 15:20:21.517155 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-01-16 15:20:21.517162 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 
'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-01-16 15:20:21.517181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-01-16 15:20:21.517202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-01-16 15:20:21.517209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-01-16 15:20:21.517217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.517224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:20:21.517232 | orchestrator | changed: [testbed-node-4] => 
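In the "Check nova-cell containers" task, "changed" for a host/item means the running container was found missing or out of sync with the desired definition, which is what notifies the restart handlers that run further below; "skipping" means the service is disabled or not scheduled on that host. A much-simplified sketch of one such comparison, checking only the image (the real module also compares volumes, environment, dimensions, and more):

    # Much-simplified illustration (not kolla_container's comparison logic):
    # decide whether a running container's image still matches the desired one.
    import subprocess
    from typing import Optional

    def running_image(name: str) -> Optional[str]:
        try:
            out = subprocess.run(
                ["docker", "inspect", "--format", "{{.Config.Image}}", name],
                capture_output=True, text=True, check=True,
            )
            return out.stdout.strip()
        except subprocess.CalledProcessError:
            return None  # container does not exist

    desired = "nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1"
    current = running_image("nova_libvirt")
    print("needs (re)create:", current != desired)
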
(item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-01-16 15:20:21.517243 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.517254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-01-16 15:20:21.517261 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.517274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.517285 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:20:21.517293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:20:21.517300 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.517308 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-01-16 15:20:21.517322 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-01-16 15:20:21.517330 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.517343 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.517351 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.517358 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.517365 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:20:21.517373 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:20:21.517387 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.517395 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.517402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-01-16 15:20:21.517410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-01-16 15:20:21.517417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-01-16 15:20:21.517430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.517437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.517467 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.517476 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.517483 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.517497 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.517505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.517520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 
'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.517528 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.517535 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.517548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.517556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-01-16 15:20:21.517564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.517575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.517585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-01-16 15:20:21.517593 | orchestrator | 2025-01-16 15:20:21.517600 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-01-16 15:20:21.517607 | orchestrator | Thursday 16 January 2025 15:18:30 +0000 (0:00:02.576) 0:03:44.164 ****** 2025-01-16 15:20:21.517614 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:20:21.517621 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:20:21.517628 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:20:21.517635 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:20:21.517642 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:20:21.517649 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:20:21.517656 | orchestrator | 2025-01-16 15:20:21.517663 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-01-16 15:20:21.517671 | orchestrator | Thursday 16 January 2025 15:18:31 +0000 (0:00:00.685) 0:03:44.850 ****** 2025-01-16 15:20:21.517678 | orchestrator | 2025-01-16 15:20:21.517685 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-01-16 15:20:21.517692 | orchestrator | Thursday 16 January 2025 15:18:31 +0000 (0:00:00.082) 0:03:44.932 ****** 2025-01-16 15:20:21.517699 | orchestrator | 2025-01-16 15:20:21.517706 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-01-16 15:20:21.517713 | orchestrator | Thursday 16 January 2025 15:18:31 +0000 (0:00:00.234) 
0:03:45.166 ****** 2025-01-16 15:20:21.517720 | orchestrator | 2025-01-16 15:20:21.517727 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-01-16 15:20:21.517734 | orchestrator | Thursday 16 January 2025 15:18:32 +0000 (0:00:00.081) 0:03:45.248 ****** 2025-01-16 15:20:21.517741 | orchestrator | 2025-01-16 15:20:21.517748 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-01-16 15:20:21.517755 | orchestrator | Thursday 16 January 2025 15:18:32 +0000 (0:00:00.173) 0:03:45.422 ****** 2025-01-16 15:20:21.517762 | orchestrator | 2025-01-16 15:20:21.517769 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-01-16 15:20:21.517776 | orchestrator | Thursday 16 January 2025 15:18:32 +0000 (0:00:00.075) 0:03:45.497 ****** 2025-01-16 15:20:21.517783 | orchestrator | 2025-01-16 15:20:21.517790 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-01-16 15:20:21.517800 | orchestrator | Thursday 16 January 2025 15:18:32 +0000 (0:00:00.177) 0:03:45.675 ****** 2025-01-16 15:20:21.517807 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:20:21.517818 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:20:21.517825 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:20:21.517833 | orchestrator | 2025-01-16 15:20:21.517839 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-01-16 15:20:21.517846 | orchestrator | Thursday 16 January 2025 15:18:41 +0000 (0:00:09.518) 0:03:55.193 ****** 2025-01-16 15:20:21.517853 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:20:21.517860 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:20:21.517867 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:20:21.517874 | orchestrator | 2025-01-16 15:20:21.517881 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-01-16 15:20:21.517888 | orchestrator | Thursday 16 January 2025 15:18:48 +0000 (0:00:06.174) 0:04:01.368 ****** 2025-01-16 15:20:21.517895 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:20:21.517902 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:20:21.517909 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:20:21.517916 | orchestrator | 2025-01-16 15:20:21.517923 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-01-16 15:20:21.517930 | orchestrator | Thursday 16 January 2025 15:19:01 +0000 (0:00:13.141) 0:04:14.510 ****** 2025-01-16 15:20:21.517937 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:20:21.517944 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:20:21.517951 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:20:21.517958 | orchestrator | 2025-01-16 15:20:21.517965 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-01-16 15:20:21.517972 | orchestrator | Thursday 16 January 2025 15:19:19 +0000 (0:00:17.818) 0:04:32.328 ****** 2025-01-16 15:20:21.517979 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:20:21.517987 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:20:21.517994 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:20:21.518001 | orchestrator | 2025-01-16 15:20:21.518008 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-01-16 15:20:21.518040 | 
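The handler block here restarts the services flagged as changed: nova-conductor and nova-novncproxy on the controllers (testbed-node-0/1/2), then nova-ssh and nova-libvirt on the compute nodes (testbed-node-3/4/5), waits for libvirtd to come back, creates the libvirt SASL user, and finally restarts nova-compute. A stand-alone sketch in the spirit of "Checking libvirt container is ready" (the role's own handler may do this differently), reusing the `virsh version --daemon` test from the container's healthcheck:

    # Illustrative readiness probe, not the role's actual handler: retry the
    # container healthcheck command via docker exec until libvirtd responds.
    import subprocess
    import sys
    import time

    def libvirt_ready(container: str = "nova_libvirt",
                      attempts: int = 10, delay: float = 3.0) -> bool:
        for _ in range(attempts):
            result = subprocess.run(
                ["docker", "exec", container, "virsh", "version", "--daemon"],
                capture_output=True,
            )
            if result.returncode == 0:
                return True
            time.sleep(delay)
        return False

    if __name__ == "__main__":
        sys.exit(0 if libvirt_ready() else 1)
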
orchestrator | Thursday 16 January 2025 15:19:19 +0000 (0:00:00.469) 0:04:32.798 ****** 2025-01-16 15:20:21.518049 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:20:21.518056 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:20:21.518063 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:20:21.518070 | orchestrator | 2025-01-16 15:20:21.518077 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-01-16 15:20:21.518084 | orchestrator | Thursday 16 January 2025 15:19:20 +0000 (0:00:00.566) 0:04:33.364 ****** 2025-01-16 15:20:21.518091 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:20:21.518098 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:20:21.518105 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:20:21.518112 | orchestrator | 2025-01-16 15:20:21.518119 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute-ironic container] ************ 2025-01-16 15:20:21.518126 | orchestrator | Thursday 16 January 2025 15:19:35 +0000 (0:00:15.178) 0:04:48.543 ****** 2025-01-16 15:20:21.518133 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:20:21.518140 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:20:21.518151 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:20:21.518158 | orchestrator | 2025-01-16 15:20:21.518165 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-01-16 15:20:21.518172 | orchestrator | Thursday 16 January 2025 15:19:45 +0000 (0:00:09.737) 0:04:58.280 ****** 2025-01-16 15:20:21.518179 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:20:21.518235 | orchestrator | 2025-01-16 15:20:21.518244 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-01-16 15:20:21.518251 | orchestrator | Thursday 16 January 2025 15:19:45 +0000 (0:00:00.068) 0:04:58.349 ****** 2025-01-16 15:20:21.518257 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:20:21.518264 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:20:21.518272 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:20:21.518279 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:20:21.518293 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:20:21.518300 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-01-16 15:20:21.518307 | orchestrator | 2025-01-16 15:20:21.518314 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-01-16 15:20:21.518321 | orchestrator | Thursday 16 January 2025 15:19:50 +0000 (0:00:05.335) 0:05:03.684 ****** 2025-01-16 15:20:21.518328 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:20:21.518335 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:20:21.518342 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:20:21.518349 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:20:21.518356 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:20:21.518363 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:20:21.518370 | orchestrator | 2025-01-16 15:20:21.518377 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-01-16 15:20:21.518384 | orchestrator | Thursday 16 January 2025 15:19:57 +0000 (0:00:06.685) 0:05:10.370 ****** 2025-01-16 15:20:21.518391 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:20:21.518398 | orchestrator | skipping: [testbed-node-5] 2025-01-16 
15:20:21.518405 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:20:21.518412 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:20:21.518423 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:20:21.518430 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3 2025-01-16 15:20:21.518437 | orchestrator | 2025-01-16 15:20:21.518444 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-01-16 15:20:21.518463 | orchestrator | Thursday 16 January 2025 15:19:59 +0000 (0:00:02.344) 0:05:12.714 ****** 2025-01-16 15:20:21.518471 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-01-16 15:20:21.518478 | orchestrator | 2025-01-16 15:20:21.518485 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-01-16 15:20:21.518492 | orchestrator | Thursday 16 January 2025 15:20:07 +0000 (0:00:07.604) 0:05:20.319 ****** 2025-01-16 15:20:21.518499 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-01-16 15:20:21.518506 | orchestrator | 2025-01-16 15:20:21.518513 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-01-16 15:20:21.518520 | orchestrator | Thursday 16 January 2025 15:20:07 +0000 (0:00:00.731) 0:05:21.051 ****** 2025-01-16 15:20:21.518527 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:20:21.518534 | orchestrator | 2025-01-16 15:20:21.518541 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-01-16 15:20:21.518735 | orchestrator | Thursday 16 January 2025 15:20:08 +0000 (0:00:00.727) 0:05:21.778 ****** 2025-01-16 15:20:21.518745 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-01-16 15:20:21.518752 | orchestrator | 2025-01-16 15:20:21.518759 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-01-16 15:20:21.518767 | orchestrator | Thursday 16 January 2025 15:20:15 +0000 (0:00:06.497) 0:05:28.275 ****** 2025-01-16 15:20:21.518774 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:20:21.518781 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:20:21.518788 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:20:21.518795 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:20:21.518802 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:20:21.518809 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:20:21.518816 | orchestrator | 2025-01-16 15:20:21.518823 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-01-16 15:20:21.518830 | orchestrator | 2025-01-16 15:20:21.518837 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-01-16 15:20:21.518844 | orchestrator | Thursday 16 January 2025 15:20:16 +0000 (0:00:01.467) 0:05:29.742 ****** 2025-01-16 15:20:21.518851 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:20:21.518858 | orchestrator | changed: [testbed-node-1] 2025-01-16 15:20:21.518865 | orchestrator | changed: [testbed-node-2] 2025-01-16 15:20:21.518876 | orchestrator | 2025-01-16 15:20:21.518883 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-01-16 15:20:21.518890 | orchestrator | 2025-01-16 15:20:21.518897 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-01-16 15:20:21.518904 | 
orchestrator | Thursday 16 January 2025 15:20:17 +0000 (0:00:00.722) 0:05:30.465 ****** 2025-01-16 15:20:21.518911 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:20:21.518918 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:20:21.518925 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:20:21.518932 | orchestrator | 2025-01-16 15:20:21.518939 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-01-16 15:20:21.518946 | orchestrator | 2025-01-16 15:20:21.518953 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-01-16 15:20:21.518959 | orchestrator | Thursday 16 January 2025 15:20:17 +0000 (0:00:00.369) 0:05:30.834 ****** 2025-01-16 15:20:21.518966 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-01-16 15:20:21.518973 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-01-16 15:20:21.518980 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-01-16 15:20:21.518987 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-01-16 15:20:21.518994 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-01-16 15:20:21.519001 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-01-16 15:20:21.519013 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:20:21.519020 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-01-16 15:20:21.519027 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-01-16 15:20:21.519034 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-01-16 15:20:21.519041 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-01-16 15:20:21.519049 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-01-16 15:20:21.519056 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-01-16 15:20:21.519063 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:20:21.519070 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-01-16 15:20:21.519077 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-01-16 15:20:21.519084 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-01-16 15:20:21.519091 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-01-16 15:20:21.519097 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-01-16 15:20:21.519104 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-01-16 15:20:21.519111 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:20:21.519118 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-01-16 15:20:21.519125 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-01-16 15:20:21.519132 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-01-16 15:20:21.519139 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-01-16 15:20:21.519149 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-01-16 15:20:21.519156 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-01-16 15:20:21.519163 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:20:21.519170 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-01-16 15:20:21.519177 | orchestrator 
| skipping: [testbed-node-1] => (item=nova-compute)  2025-01-16 15:20:21.519184 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-01-16 15:20:21.519191 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-01-16 15:20:21.519198 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-01-16 15:20:21.519205 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-01-16 15:20:21.519215 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:20:21.519222 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-01-16 15:20:21.519229 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-01-16 15:20:21.519236 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-01-16 15:20:21.519243 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-01-16 15:20:21.519250 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-01-16 15:20:21.519257 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-01-16 15:20:21.519264 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:20:21.519271 | orchestrator | 2025-01-16 15:20:21.519278 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-01-16 15:20:21.519285 | orchestrator | 2025-01-16 15:20:21.519292 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-01-16 15:20:21.519299 | orchestrator | Thursday 16 January 2025 15:20:18 +0000 (0:00:00.873) 0:05:31.708 ****** 2025-01-16 15:20:21.519306 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-01-16 15:20:21.519313 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-01-16 15:20:21.519320 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:20:21.519326 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-01-16 15:20:21.519333 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-01-16 15:20:21.519340 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:20:21.519347 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-01-16 15:20:21.519354 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2025-01-16 15:20:21.519361 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:20:21.519368 | orchestrator | 2025-01-16 15:20:21.519375 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-01-16 15:20:21.519382 | orchestrator | 2025-01-16 15:20:21.519389 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-01-16 15:20:21.519396 | orchestrator | Thursday 16 January 2025 15:20:18 +0000 (0:00:00.461) 0:05:32.169 ****** 2025-01-16 15:20:21.519403 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:20:21.519410 | orchestrator | 2025-01-16 15:20:21.519417 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-01-16 15:20:21.519424 | orchestrator | 2025-01-16 15:20:21.519431 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-01-16 15:20:21.519437 | orchestrator | Thursday 16 January 2025 15:20:19 +0000 (0:00:00.481) 0:05:32.651 ****** 2025-01-16 15:20:21.519444 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:20:21.519489 | orchestrator | skipping: [testbed-node-1] 
2025-01-16 15:20:21.519497 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:20:21.519504 | orchestrator | 2025-01-16 15:20:21.519511 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 15:20:21.519518 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 15:20:21.519526 | orchestrator | testbed-node-0 : ok=55  changed=36  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-01-16 15:20:21.519537 | orchestrator | testbed-node-1 : ok=28  changed=20  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-01-16 15:20:24.517113 | orchestrator | testbed-node-2 : ok=28  changed=20  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-01-16 15:20:24.517224 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-01-16 15:20:24.517237 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-01-16 15:20:24.517268 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-01-16 15:20:24.517277 | orchestrator | 2025-01-16 15:20:24.517285 | orchestrator | 2025-01-16 15:20:24.517294 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 15:20:24.517303 | orchestrator | Thursday 16 January 2025 15:20:19 +0000 (0:00:00.328) 0:05:32.980 ****** 2025-01-16 15:20:24.517310 | orchestrator | =============================================================================== 2025-01-16 15:20:24.517318 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.51s 2025-01-16 15:20:24.517326 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 17.82s 2025-01-16 15:20:24.517333 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 15.18s 2025-01-16 15:20:24.517341 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 14.75s 2025-01-16 15:20:24.517348 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 13.14s 2025-01-16 15:20:24.517356 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 10.26s 2025-01-16 15:20:24.517363 | orchestrator | nova-cell : Restart nova-compute-ironic container ----------------------- 9.74s 2025-01-16 15:20:24.517375 | orchestrator | nova : Running Nova API bootstrap container ----------------------------- 9.61s 2025-01-16 15:20:24.517382 | orchestrator | nova-cell : Restart nova-conductor container ---------------------------- 9.52s 2025-01-16 15:20:24.517390 | orchestrator | nova-cell : Create cell ------------------------------------------------- 7.88s 2025-01-16 15:20:24.517398 | orchestrator | nova-cell : Get a list of existing cells -------------------------------- 7.78s 2025-01-16 15:20:24.517406 | orchestrator | nova-cell : Get a list of existing cells -------------------------------- 7.60s 2025-01-16 15:20:24.517413 | orchestrator | nova-cell : Get a list of existing cells -------------------------------- 7.37s 2025-01-16 15:20:24.517421 | orchestrator | nova : Restart nova-scheduler container --------------------------------- 7.36s 2025-01-16 15:20:24.517428 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 6.69s 2025-01-16 15:20:24.517436 | orchestrator | nova-cell : 
Discover nova hosts ----------------------------------------- 6.50s 2025-01-16 15:20:24.517443 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 6.41s 2025-01-16 15:20:24.517487 | orchestrator | nova-cell : Copying over nova.conf -------------------------------------- 6.32s 2025-01-16 15:20:24.517497 | orchestrator | nova-cell : Restart nova-novncproxy container --------------------------- 6.17s 2025-01-16 15:20:24.517504 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves ---- 5.34s 2025-01-16 15:20:24.517512 | orchestrator | 2025-01-16 15:20:21 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-01-16 15:20:24.517537 | orchestrator | 2025-01-16 15:20:24 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-01-16 15:20:27.535894 | orchestrator | 2025-01-16 15:20:27 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-01-16 15:20:30.554008 | orchestrator | 2025-01-16 15:20:30 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-01-16 15:20:33.572558 | orchestrator | 2025-01-16 15:20:33 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-01-16 15:20:36.589175 | orchestrator | 2025-01-16 15:20:36 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-01-16 15:20:39.605846 | orchestrator | 2025-01-16 15:20:39 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-01-16 15:20:42.622353 | orchestrator | 2025-01-16 15:20:42 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-01-16 15:20:45.639911 | orchestrator | 2025-01-16 15:20:45 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-01-16 15:20:48.657189 | orchestrator | 2025-01-16 15:20:48 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-01-16 15:20:51.673258 | orchestrator | 2025-01-16 15:20:51 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-01-16 15:20:54.691433 | orchestrator | 2025-01-16 15:20:54 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-01-16 15:20:57.709215 | orchestrator | 2025-01-16 15:20:57 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-01-16 15:21:00.726327 | orchestrator | 2025-01-16 15:21:00 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-01-16 15:21:03.748167 | orchestrator | 2025-01-16 15:21:03 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-01-16 15:21:06.765805 | orchestrator | 2025-01-16 15:21:06 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-01-16 15:21:09.783397 | orchestrator | 2025-01-16 15:21:09 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-01-16 15:21:12.800965 | orchestrator | 2025-01-16 15:21:12 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-01-16 15:21:15.823267 | orchestrator | 2025-01-16 15:21:15 | INFO  | Task 6982139a-6e72-467f-ab43-31a0eb8446ad is in state STARTED 2025-01-16 15:21:15.823635 | orchestrator | 2025-01-16 15:21:15 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:21:18.851738 | orchestrator | 2025-01-16 15:21:18 | INFO  | Task 6982139a-6e72-467f-ab43-31a0eb8446ad is in state STARTED 2025-01-16 15:21:21.872851 | orchestrator | 2025-01-16 15:21:18 | INFO  | Wait 1 second(s) until the next check 2025-01-16 15:21:21.872981 | orchestrator | 2025-01-16 15:21:21 | INFO  | Task 6982139a-6e72-467f-ab43-31a0eb8446ad is in state SUCCESS 2025-01-16 15:21:24.889147 | orchestrator | 2025-01-16 15:21:21 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-01-16 15:21:24.889282 | 
orchestrator | 2025-01-16 15:21:24 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-01-16 15:21:27.906308 | orchestrator | 2025-01-16 15:21:28.020881 | orchestrator | None 2025-01-16 15:21:28.021006 | orchestrator | 2025-01-16 15:21:28.025097 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Thu Jan 16 15:21:28 UTC 2025 2025-01-16 15:21:28.025151 | orchestrator | 2025-01-16 15:21:38.901709 | orchestrator | changed 2025-01-16 15:21:39.266584 | 2025-01-16 15:21:39.266731 | TASK [Bootstrap services] 2025-01-16 15:21:39.937609 | orchestrator | 2025-01-16 15:21:39.940668 | orchestrator | # BOOTSTRAP 2025-01-16 15:21:39.940727 | orchestrator | 2025-01-16 15:21:39.940744 | orchestrator | + set -e 2025-01-16 15:21:39.940792 | orchestrator | + echo 2025-01-16 15:21:39.940810 | orchestrator | + echo '# BOOTSTRAP' 2025-01-16 15:21:39.940825 | orchestrator | + echo 2025-01-16 15:21:39.940848 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-01-16 15:21:39.940887 | orchestrator | + set -e 2025-01-16 15:21:43.885435 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack-services.sh 2025-01-16 15:21:43.885652 | orchestrator | 2025-01-16 15:21:43 | INFO  | Flavor SCS-1V-4 created 2025-01-16 15:21:44.384668 | orchestrator | 2025-01-16 15:21:44 | INFO  | Flavor SCS-2V-8 created 2025-01-16 15:21:44.662662 | orchestrator | 2025-01-16 15:21:44 | INFO  | Flavor SCS-4V-16 created 2025-01-16 15:21:44.726845 | orchestrator | 2025-01-16 15:21:44 | INFO  | Flavor SCS-8V-32 created 2025-01-16 15:21:44.982507 | orchestrator | 2025-01-16 15:21:44 | INFO  | Flavor SCS-1V-2 created 2025-01-16 15:21:45.039581 | orchestrator | 2025-01-16 15:21:45 | INFO  | Flavor SCS-2V-4 created 2025-01-16 15:21:45.090605 | orchestrator | 2025-01-16 15:21:45 | INFO  | Flavor SCS-4V-8 created 2025-01-16 15:21:45.139890 | orchestrator | 2025-01-16 15:21:45 | INFO  | Flavor SCS-8V-16 created 2025-01-16 15:21:45.202760 | orchestrator | 2025-01-16 15:21:45 | INFO  | Flavor SCS-16V-32 created 2025-01-16 15:21:45.269645 | orchestrator | 2025-01-16 15:21:45 | INFO  | Flavor SCS-1V-8 created 2025-01-16 15:21:45.330571 | orchestrator | 2025-01-16 15:21:45 | INFO  | Flavor SCS-2V-16 created 2025-01-16 15:21:45.399019 | orchestrator | 2025-01-16 15:21:45 | INFO  | Flavor SCS-4V-32 created 2025-01-16 15:21:45.456804 | orchestrator | 2025-01-16 15:21:45 | INFO  | Flavor SCS-1L-1 created 2025-01-16 15:21:45.510796 | orchestrator | 2025-01-16 15:21:45 | INFO  | Flavor SCS-2V-4-20s created 2025-01-16 15:21:45.571132 | orchestrator | 2025-01-16 15:21:45 | INFO  | Flavor SCS-4V-16-100s created 2025-01-16 15:21:45.629077 | orchestrator | 2025-01-16 15:21:45 | INFO  | Flavor SCS-1V-4-10 created 2025-01-16 15:21:45.680929 | orchestrator | 2025-01-16 15:21:45 | INFO  | Flavor SCS-2V-8-20 created 2025-01-16 15:21:45.735422 | orchestrator | 2025-01-16 15:21:45 | INFO  | Flavor SCS-4V-16-50 created 2025-01-16 15:21:45.798419 | orchestrator | 2025-01-16 15:21:45 | INFO  | Flavor SCS-8V-32-100 created 2025-01-16 15:21:45.858575 | orchestrator | 2025-01-16 15:21:45 | INFO  | Flavor SCS-1V-2-5 created 2025-01-16 15:21:45.921193 | orchestrator | 2025-01-16 15:21:45 | INFO  | Flavor SCS-2V-4-10 created 2025-01-16 15:21:45.981433 | orchestrator | 2025-01-16 15:21:45 | INFO  | Flavor SCS-4V-8-20 created 2025-01-16 15:21:46.076896 | orchestrator | 2025-01-16 15:21:46 | INFO  | Flavor SCS-8V-16-50 created 2025-01-16 15:21:46.137429 | orchestrator | 2025-01-16 15:21:46 | INFO  | Flavor SCS-16V-32-100 created 
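
The flavor names logged above (and continuing below) follow the SCS naming scheme: SCS-<vCPUs>V-<RAM in GiB>, optionally followed by -<root disk in GB>, with an L instead of V for low-performance cores and a trailing "s" for SSD-backed variants. The bootstrap script creates them through its own tooling; purely as an illustration, a hypothetical equivalent with the plain OpenStack CLI (sizes derived from the names, all flags assumed) could look like this:

# Hypothetical re-creation of a few of the flavors seen in the log.
# SCS-1V-4    = 1 vCPU, 4 GiB RAM, no root disk (boot from volume)
# SCS-2V-4-10 = 2 vCPUs, 4 GiB RAM, 10 GB root disk
openstack flavor create --vcpus 1 --ram 4096 --disk 0  --public SCS-1V-4
openstack flavor create --vcpus 2 --ram 8192 --disk 0  --public SCS-2V-8
openstack flavor create --vcpus 2 --ram 4096 --disk 10 --public SCS-2V-4-10
openstack flavor create --vcpus 1 --ram 1024 --disk 5  --public SCS-1L-1-5
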
2025-01-16 15:21:46.190919 | orchestrator | 2025-01-16 15:21:46 | INFO  | Flavor SCS-1V-8-20 created 2025-01-16 15:21:46.244388 | orchestrator | 2025-01-16 15:21:46 | INFO  | Flavor SCS-2V-16-50 created 2025-01-16 15:21:46.300621 | orchestrator | 2025-01-16 15:21:46 | INFO  | Flavor SCS-4V-32-100 created 2025-01-16 15:21:46.354664 | orchestrator | 2025-01-16 15:21:46 | INFO  | Flavor SCS-1L-1-5 created 2025-01-16 15:21:47.545947 | orchestrator | 2025-01-16 15:21:47 | INFO  | Trying to run play bootstrap-basic in environment openstack 2025-01-16 15:21:47.589872 | orchestrator | 2025-01-16 15:21:47 | INFO  | Task c777b865-cdb4-46af-b5c5-d2b01498ec9c (bootstrap-basic) was prepared for execution. 2025-01-16 15:21:50.728798 | orchestrator | 2025-01-16 15:21:47 | INFO  | It takes a moment until task c777b865-cdb4-46af-b5c5-d2b01498ec9c (bootstrap-basic) has been started and output is visible here. 2025-01-16 15:21:50.728983 | orchestrator | 2025-01-16 15:21:52.028250 | orchestrator | PLAY [Prepare masquerading on the manager node] ******************************** 2025-01-16 15:21:52.028340 | orchestrator | 2025-01-16 15:21:52.028350 | orchestrator | TASK [Accept FORWARD on the management interface (incoming)] ******************* 2025-01-16 15:21:52.028358 | orchestrator | Thursday 16 January 2025 15:21:50 +0000 (0:00:00.858) 0:00:00.858 ****** 2025-01-16 15:21:52.028378 | orchestrator | ok: [testbed-manager] 2025-01-16 15:21:52.969873 | orchestrator | 2025-01-16 15:21:52.969992 | orchestrator | TASK [Accept FORWARD on the management interface (outgoing)] ******************* 2025-01-16 15:21:52.970010 | orchestrator | Thursday 16 January 2025 15:21:52 +0000 (0:00:01.297) 0:00:02.156 ****** 2025-01-16 15:21:52.970085 | orchestrator | ok: [testbed-manager] 2025-01-16 15:21:53.937873 | orchestrator | 2025-01-16 15:21:53.938075 | orchestrator | TASK [Masquerade traffic on the management interface] ************************** 2025-01-16 15:21:53.938101 | orchestrator | Thursday 16 January 2025 15:21:52 +0000 (0:00:00.940) 0:00:03.097 ****** 2025-01-16 15:21:53.938133 | orchestrator | ok: [testbed-manager] 2025-01-16 15:21:53.938635 | orchestrator | 2025-01-16 15:21:53.938669 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2025-01-16 15:21:53.938691 | orchestrator | 2025-01-16 15:21:56.072739 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-01-16 15:21:56.072871 | orchestrator | Thursday 16 January 2025 15:21:53 +0000 (0:00:00.971) 0:00:04.068 ****** 2025-01-16 15:21:56.072911 | orchestrator | ok: [localhost] 2025-01-16 15:22:01.705104 | orchestrator | 2025-01-16 15:22:01.705199 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2025-01-16 15:22:01.705213 | orchestrator | Thursday 16 January 2025 15:21:56 +0000 (0:00:02.134) 0:00:06.203 ****** 2025-01-16 15:22:01.705240 | orchestrator | ok: [localhost] 2025-01-16 15:22:06.721453 | orchestrator | 2025-01-16 15:22:06.721611 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2025-01-16 15:22:06.722242 | orchestrator | Thursday 16 January 2025 15:22:01 +0000 (0:00:05.629) 0:00:11.832 ****** 2025-01-16 15:22:06.722281 | orchestrator | changed: [localhost] 2025-01-16 15:22:11.454547 | orchestrator | 2025-01-16 15:22:11.454658 | orchestrator | TASK [Get volume type local] *************************************************** 2025-01-16 15:22:11.454670 | orchestrator | 
Thursday 16 January 2025 15:22:06 +0000 (0:00:05.018) 0:00:16.850 ****** 2025-01-16 15:22:11.454692 | orchestrator | ok: [localhost] 2025-01-16 15:22:15.344356 | orchestrator | 2025-01-16 15:22:15.344486 | orchestrator | TASK [Create volume type local] ************************************************ 2025-01-16 15:22:15.344582 | orchestrator | Thursday 16 January 2025 15:22:11 +0000 (0:00:04.733) 0:00:21.584 ****** 2025-01-16 15:22:15.344620 | orchestrator | changed: [localhost] 2025-01-16 15:22:20.263108 | orchestrator | 2025-01-16 15:22:20.263203 | orchestrator | TASK [Create public network] *************************************************** 2025-01-16 15:22:20.263215 | orchestrator | Thursday 16 January 2025 15:22:15 +0000 (0:00:03.889) 0:00:25.474 ****** 2025-01-16 15:22:20.263236 | orchestrator | changed: [localhost] 2025-01-16 15:22:24.608167 | orchestrator | 2025-01-16 15:22:24.608292 | orchestrator | TASK [Set public network to default] ******************************************* 2025-01-16 15:22:24.608311 | orchestrator | Thursday 16 January 2025 15:22:20 +0000 (0:00:04.919) 0:00:30.393 ****** 2025-01-16 15:22:24.608339 | orchestrator | changed: [localhost] 2025-01-16 15:22:27.927830 | orchestrator | 2025-01-16 15:22:27.928026 | orchestrator | TASK [Create public subnet] **************************************************** 2025-01-16 15:22:27.928050 | orchestrator | Thursday 16 January 2025 15:22:24 +0000 (0:00:04.345) 0:00:34.738 ****** 2025-01-16 15:22:27.928079 | orchestrator | changed: [localhost] 2025-01-16 15:22:27.928131 | orchestrator | 2025-01-16 15:22:27.928140 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2025-01-16 15:22:27.928151 | orchestrator | Thursday 16 January 2025 15:22:27 +0000 (0:00:03.319) 0:00:38.058 ****** 2025-01-16 15:22:31.544306 | orchestrator | changed: [localhost] 2025-01-16 15:22:31.545005 | orchestrator | 2025-01-16 15:22:31.545073 | orchestrator | TASK [Create manager role] ***************************************************** 2025-01-16 15:22:31.545127 | orchestrator | Thursday 16 January 2025 15:22:31 +0000 (0:00:03.616) 0:00:41.674 ****** 2025-01-16 15:22:34.538444 | orchestrator | ok: [localhost] 2025-01-16 15:22:34.540179 | orchestrator | 2025-01-16 15:22:34.540242 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 15:22:34.540261 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 15:22:34.540279 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 15:22:34.540332 | orchestrator | 2025-01-16 15:22:34.540362 | orchestrator | 2025-01-16 15:22:34.540388 | orchestrator | 2025-01-16 15:22:34 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-01-16 15:22:34.540411 | orchestrator | 2025-01-16 15:22:34 | INFO  | Please wait and do not abort execution. 
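
The bootstrap-basic play recapped below boils down to a handful of OpenStack API calls: two volume types (one LUKS-encrypted), a public external network marked as default, a subnet on it, a default IPv4 subnet pool, and a manager role. A rough, hypothetical CLI equivalent follows; only the resource names come from the log, while the encryption parameters and address ranges are assumptions:

# Hypothetical CLI equivalent of the bootstrap-basic tasks (parameters assumed).
openstack volume type create --encryption-provider luks \
    --encryption-cipher aes-xts-plain64 --encryption-key-size 256 \
    --encryption-control-location front-end LUKS          # "Create volume type LUKS"
openstack volume type create local                        # "Create volume type local"
openstack network create --external --default public      # create public network + set as default
openstack subnet create --network public --subnet-range 192.168.112.0/24 public-subnet   # range assumed
openstack subnet pool create --default --pool-prefix 10.0.0.0/16 --default-prefix-length 24 default-ipv4   # prefixes assumed
openstack role create manager
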
2025-01-16 15:22:34.540434 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 15:22:34.540469 | orchestrator | Thursday 16 January 2025 15:22:34 +0000 (0:00:02.992) 0:00:44.667 ****** 2025-01-16 15:22:34.540616 | orchestrator | =============================================================================== 2025-01-16 15:22:34.540652 | orchestrator | Get volume type LUKS ---------------------------------------------------- 5.63s 2025-01-16 15:22:34.540674 | orchestrator | Create volume type LUKS ------------------------------------------------- 5.02s 2025-01-16 15:22:34.540697 | orchestrator | Create public network --------------------------------------------------- 4.92s 2025-01-16 15:22:34.540719 | orchestrator | Get volume type local --------------------------------------------------- 4.73s 2025-01-16 15:22:34.540742 | orchestrator | Set public network to default ------------------------------------------- 4.35s 2025-01-16 15:22:34.540764 | orchestrator | Create volume type local ------------------------------------------------ 3.89s 2025-01-16 15:22:34.540784 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.62s 2025-01-16 15:22:34.540815 | orchestrator | Create public subnet ---------------------------------------------------- 3.32s 2025-01-16 15:22:34.541187 | orchestrator | Create manager role ----------------------------------------------------- 2.99s 2025-01-16 15:22:34.541247 | orchestrator | Gathering Facts --------------------------------------------------------- 2.13s 2025-01-16 15:22:37.846806 | orchestrator | Accept FORWARD on the management interface (incoming) ------------------- 1.30s 2025-01-16 15:22:37.846901 | orchestrator | Masquerade traffic on the management interface -------------------------- 0.97s 2025-01-16 15:22:37.846913 | orchestrator | Accept FORWARD on the management interface (outgoing) ------------------- 0.94s 2025-01-16 15:22:37.846935 | orchestrator | 2025-01-16 15:22:37 | INFO  | Processing image 'Cirros 0.6.2' 2025-01-16 15:22:38.058179 | orchestrator | 2025-01-16 15:22:38 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2025-01-16 15:22:39.133867 | orchestrator | 2025-01-16 15:22:38 | INFO  | Importing image Cirros 0.6.2 2025-01-16 15:22:39.133968 | orchestrator | 2025-01-16 15:22:38 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-01-16 15:22:39.133998 | orchestrator | 2025-01-16 15:22:39 | INFO  | Waiting for image to leave queued state... 2025-01-16 15:22:41.155024 | orchestrator | 2025-01-16 15:22:41 | INFO  | Waiting for import to complete... 
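
The image handling that begins above and continues below follows a fixed pattern: test the download URL, import the image, wait for it to become active, then apply properties, tags, and visibility. In the testbed this is driven by the image-manager tooling; a rough manual equivalent with the plain OpenStack CLI, assuming a local download rather than a server-side import, might look like this:

# Hypothetical manual equivalent of the Cirros image bootstrap (URL from the log, everything else assumed).
url=https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
curl -fL -o cirros-0.6.2-x86_64-disk.img "$url"
openstack image create --disk-format qcow2 --container-format bare \
    --file cirros-0.6.2-x86_64-disk.img --tag os:cirros "Cirros 0.6.2"
openstack image set --public \
    --property hw_disk_bus=scsi --property hw_scsi_model=virtio-scsi \
    --property hw_rng_model=virtio --property os_distro=cirros "Cirros 0.6.2"

The properties mirror the ones the log applies a few lines further down (hw_disk_bus, hw_rng_model, os_distro, and so on).
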
2025-01-16 15:22:51.345996 | orchestrator | 2025-01-16 15:22:51 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2025-01-16 15:22:51.429236 | orchestrator | 2025-01-16 15:22:51 | INFO  | Checking parameters of 'Cirros 0.6.2' 2025-01-16 15:22:51.553757 | orchestrator | 2025-01-16 15:22:51 | INFO  | Setting internal_version = 0.6.2 2025-01-16 15:22:51.553904 | orchestrator | 2025-01-16 15:22:51 | INFO  | Setting image_original_user = cirros 2025-01-16 15:22:51.553925 | orchestrator | 2025-01-16 15:22:51 | INFO  | Adding tag os:cirros 2025-01-16 15:22:51.553956 | orchestrator | 2025-01-16 15:22:51 | INFO  | Setting property architecture: x86_64 2025-01-16 15:22:51.699637 | orchestrator | 2025-01-16 15:22:51 | INFO  | Setting property hw_disk_bus: scsi 2025-01-16 15:22:51.803877 | orchestrator | 2025-01-16 15:22:51 | INFO  | Setting property hw_rng_model: virtio 2025-01-16 15:22:51.901620 | orchestrator | 2025-01-16 15:22:51 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-01-16 15:22:52.002894 | orchestrator | 2025-01-16 15:22:51 | INFO  | Setting property hw_watchdog_action: reset 2025-01-16 15:22:52.101613 | orchestrator | 2025-01-16 15:22:52 | INFO  | Setting property hypervisor_type: qemu 2025-01-16 15:22:52.215167 | orchestrator | 2025-01-16 15:22:52 | INFO  | Setting property os_distro: cirros 2025-01-16 15:22:52.323102 | orchestrator | 2025-01-16 15:22:52 | INFO  | Setting property replace_frequency: never 2025-01-16 15:22:52.432421 | orchestrator | 2025-01-16 15:22:52 | INFO  | Setting property uuid_validity: none 2025-01-16 15:22:52.539694 | orchestrator | 2025-01-16 15:22:52 | INFO  | Setting property provided_until: none 2025-01-16 15:22:52.652232 | orchestrator | 2025-01-16 15:22:52 | INFO  | Setting property image_description: Cirros 2025-01-16 15:22:52.775444 | orchestrator | 2025-01-16 15:22:52 | INFO  | Setting property image_name: Cirros 2025-01-16 15:22:52.878966 | orchestrator | 2025-01-16 15:22:52 | INFO  | Setting property internal_version: 0.6.2 2025-01-16 15:22:52.987784 | orchestrator | 2025-01-16 15:22:52 | INFO  | Setting property image_original_user: cirros 2025-01-16 15:22:53.090494 | orchestrator | 2025-01-16 15:22:53 | INFO  | Setting property os_version: 0.6.2 2025-01-16 15:22:53.208640 | orchestrator | 2025-01-16 15:22:53 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-01-16 15:22:53.329901 | orchestrator | 2025-01-16 15:22:53 | INFO  | Setting property image_build_date: 2023-05-30 2025-01-16 15:22:53.587152 | orchestrator | 2025-01-16 15:22:53 | INFO  | Checking status of 'Cirros 0.6.2' 2025-01-16 15:22:53.805448 | orchestrator | 2025-01-16 15:22:53 | INFO  | Checking visibility of 'Cirros 0.6.2' 2025-01-16 15:22:53.805565 | orchestrator | 2025-01-16 15:22:53 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2025-01-16 15:22:53.805593 | orchestrator | 2025-01-16 15:22:53 | INFO  | Processing image 'Cirros 0.6.3' 2025-01-16 15:22:54.025159 | orchestrator | 2025-01-16 15:22:54 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2025-01-16 15:22:54.711105 | orchestrator | 2025-01-16 15:22:54 | INFO  | Importing image Cirros 0.6.3 2025-01-16 15:22:54.711187 | orchestrator | 2025-01-16 15:22:54 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-01-16 15:22:54.711208 | orchestrator | 2025-01-16 
15:22:54 | INFO  | Waiting for image to leave queued state... 2025-01-16 15:22:56.732252 | orchestrator | 2025-01-16 15:22:56 | INFO  | Waiting for import to complete... 2025-01-16 15:23:06.802974 | orchestrator | 2025-01-16 15:23:06 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2025-01-16 15:23:06.911842 | orchestrator | 2025-01-16 15:23:06 | INFO  | Checking parameters of 'Cirros 0.6.3' 2025-01-16 15:23:07.025235 | orchestrator | 2025-01-16 15:23:06 | INFO  | Setting internal_version = 0.6.3 2025-01-16 15:23:07.025429 | orchestrator | 2025-01-16 15:23:06 | INFO  | Setting image_original_user = cirros 2025-01-16 15:23:07.025465 | orchestrator | 2025-01-16 15:23:06 | INFO  | Adding tag os:cirros 2025-01-16 15:23:07.025512 | orchestrator | 2025-01-16 15:23:07 | INFO  | Setting property architecture: x86_64 2025-01-16 15:23:07.124169 | orchestrator | 2025-01-16 15:23:07 | INFO  | Setting property hw_disk_bus: scsi 2025-01-16 15:23:07.221413 | orchestrator | 2025-01-16 15:23:07 | INFO  | Setting property hw_rng_model: virtio 2025-01-16 15:23:07.324240 | orchestrator | 2025-01-16 15:23:07 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-01-16 15:23:07.422195 | orchestrator | 2025-01-16 15:23:07 | INFO  | Setting property hw_watchdog_action: reset 2025-01-16 15:23:07.520483 | orchestrator | 2025-01-16 15:23:07 | INFO  | Setting property hypervisor_type: qemu 2025-01-16 15:23:07.623338 | orchestrator | 2025-01-16 15:23:07 | INFO  | Setting property os_distro: cirros 2025-01-16 15:23:07.722575 | orchestrator | 2025-01-16 15:23:07 | INFO  | Setting property replace_frequency: never 2025-01-16 15:23:07.820898 | orchestrator | 2025-01-16 15:23:07 | INFO  | Setting property uuid_validity: none 2025-01-16 15:23:07.928955 | orchestrator | 2025-01-16 15:23:07 | INFO  | Setting property provided_until: none 2025-01-16 15:23:08.046333 | orchestrator | 2025-01-16 15:23:08 | INFO  | Setting property image_description: Cirros 2025-01-16 15:23:08.146412 | orchestrator | 2025-01-16 15:23:08 | INFO  | Setting property image_name: Cirros 2025-01-16 15:23:08.263268 | orchestrator | 2025-01-16 15:23:08 | INFO  | Setting property internal_version: 0.6.3 2025-01-16 15:23:08.368599 | orchestrator | 2025-01-16 15:23:08 | INFO  | Setting property image_original_user: cirros 2025-01-16 15:23:08.472594 | orchestrator | 2025-01-16 15:23:08 | INFO  | Setting property os_version: 0.6.3 2025-01-16 15:23:08.582878 | orchestrator | 2025-01-16 15:23:08 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-01-16 15:23:08.710881 | orchestrator | 2025-01-16 15:23:08 | INFO  | Setting property image_build_date: 2024-09-26 2025-01-16 15:23:08.815976 | orchestrator | 2025-01-16 15:23:08 | INFO  | Checking status of 'Cirros 0.6.3' 2025-01-16 15:23:09.301043 | orchestrator | 2025-01-16 15:23:08 | INFO  | Checking visibility of 'Cirros 0.6.3' 2025-01-16 15:23:09.301185 | orchestrator | 2025-01-16 15:23:08 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2025-01-16 15:23:09.301240 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2025-01-16 15:23:10.303891 | orchestrator | 2025-01-16 15:23:10 | INFO  | date: 2025-01-16 2025-01-16 15:23:10.339390 | orchestrator | 2025-01-16 15:23:10 | INFO  | image: octavia-amphora-haproxy-2024.1.20250116.qcow2 2025-01-16 15:23:10.339478 | orchestrator | 2025-01-16 15:23:10 | INFO  | url: 
https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.1.20250116.qcow2 2025-01-16 15:23:10.339499 | orchestrator | 2025-01-16 15:23:10 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.1.20250116.qcow2.CHECKSUM 2025-01-16 15:23:10.339516 | orchestrator | 2025-01-16 15:23:10 | INFO  | checksum: b5f2c36fda4c5c54aa4cf2c107e3bf828d3ce77addee8bedd6ae05e65a9688a0 2025-01-16 15:23:11.867146 | orchestrator | 2025-01-16 15:23:11 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-01-16' 2025-01-16 15:23:11.877265 | orchestrator | 2025-01-16 15:23:11 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.1.20250116.qcow2: 200 2025-01-16 15:23:12.033459 | orchestrator | 2025-01-16 15:23:11 | INFO  | Importing image OpenStack Octavia Amphora 2025-01-16 2025-01-16 15:23:12.033592 | orchestrator | 2025-01-16 15:23:11 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.1.20250116.qcow2 2025-01-16 15:23:12.033615 | orchestrator | 2025-01-16 15:23:12 | INFO  | Waiting for image to leave queued state... 2025-01-16 15:23:14.050144 | orchestrator | 2025-01-16 15:23:14 | INFO  | Waiting for import to complete... 2025-01-16 15:23:24.109815 | orchestrator | 2025-01-16 15:23:24 | INFO  | Waiting for import to complete... 2025-01-16 15:23:34.171819 | orchestrator | 2025-01-16 15:23:34 | INFO  | Waiting for import to complete... 
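
Before importing the amphora image, the script logs both the image URL and a separate .CHECKSUM URL together with the expected SHA256 sum. Verifying such a download by hand is short; a small sketch, assuming the .CHECKSUM file is in the usual sha256sum format:

# Hypothetical download-and-verify step for the amphora image (URLs taken from the log above).
base=https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image
img=octavia-amphora-haproxy-2024.1.20250116.qcow2
curl -fL -O "$base/$img"
curl -fL -O "$base/$img.CHECKSUM"
sha256sum -c "$img.CHECKSUM"   # should report the image file as OK
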
2025-01-16 15:23:44.364372 | orchestrator | 2025-01-16 15:23:44 | INFO  | Import of 'OpenStack Octavia Amphora 2025-01-16' successfully completed, reloading images 2025-01-16 15:23:44.511083 | orchestrator | 2025-01-16 15:23:44 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-01-16' 2025-01-16 15:23:44.651744 | orchestrator | 2025-01-16 15:23:44 | INFO  | Setting internal_version = 2025-01-16 2025-01-16 15:23:44.651825 | orchestrator | 2025-01-16 15:23:44 | INFO  | Setting image_original_user = ubuntu 2025-01-16 15:23:44.651832 | orchestrator | 2025-01-16 15:23:44 | INFO  | Adding tag amphora 2025-01-16 15:23:44.651847 | orchestrator | 2025-01-16 15:23:44 | INFO  | Adding tag os:ubuntu 2025-01-16 15:23:44.744182 | orchestrator | 2025-01-16 15:23:44 | INFO  | Setting property architecture: x86_64 2025-01-16 15:23:44.840601 | orchestrator | 2025-01-16 15:23:44 | INFO  | Setting property hw_disk_bus: scsi 2025-01-16 15:23:44.946499 | orchestrator | 2025-01-16 15:23:44 | INFO  | Setting property hw_rng_model: virtio 2025-01-16 15:23:45.041817 | orchestrator | 2025-01-16 15:23:45 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-01-16 15:23:45.137838 | orchestrator | 2025-01-16 15:23:45 | INFO  | Setting property hw_watchdog_action: reset 2025-01-16 15:23:45.240830 | orchestrator | 2025-01-16 15:23:45 | INFO  | Setting property hypervisor_type: qemu 2025-01-16 15:23:45.344091 | orchestrator | 2025-01-16 15:23:45 | INFO  | Setting property os_distro: ubuntu 2025-01-16 15:23:45.440647 | orchestrator | 2025-01-16 15:23:45 | INFO  | Setting property replace_frequency: quarterly 2025-01-16 15:23:45.541210 | orchestrator | 2025-01-16 15:23:45 | INFO  | Setting property uuid_validity: last-1 2025-01-16 15:23:45.642315 | orchestrator | 2025-01-16 15:23:45 | INFO  | Setting property provided_until: none 2025-01-16 15:23:45.744237 | orchestrator | 2025-01-16 15:23:45 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2025-01-16 15:23:45.844472 | orchestrator | 2025-01-16 15:23:45 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2025-01-16 15:23:45.945038 | orchestrator | 2025-01-16 15:23:45 | INFO  | Setting property internal_version: 2025-01-16 2025-01-16 15:23:46.061910 | orchestrator | 2025-01-16 15:23:46 | INFO  | Setting property image_original_user: ubuntu 2025-01-16 15:23:46.175123 | orchestrator | 2025-01-16 15:23:46 | INFO  | Setting property os_version: 2025-01-16 2025-01-16 15:23:46.279337 | orchestrator | 2025-01-16 15:23:46 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.1.20250116.qcow2 2025-01-16 15:23:46.379262 | orchestrator | 2025-01-16 15:23:46 | INFO  | Setting property image_build_date: 2025-01-16 2025-01-16 15:23:46.481078 | orchestrator | 2025-01-16 15:23:46 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-01-16' 2025-01-16 15:23:46.558374 | orchestrator | 2025-01-16 15:23:46 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-01-16' 2025-01-16 15:23:46.558510 | orchestrator | 2025-01-16 15:23:46 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2025-01-16 15:23:46.738631 | orchestrator | 2025-01-16 15:23:46 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2025-01-16 15:23:46.738775 | orchestrator | 2025-01-16 15:23:46 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2025-01-16 15:23:46.738803 | 
orchestrator | 2025-01-16 15:23:46 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2025-01-16 15:23:46.910616 | orchestrator | changed 2025-01-16 15:23:46.936367 | 2025-01-16 15:23:46.936482 | TASK [Run checks] 2025-01-16 15:23:47.584833 | orchestrator | + set -e 2025-01-16 15:23:47.602871 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-01-16 15:23:47.602989 | orchestrator | ++ export INTERACTIVE=false 2025-01-16 15:23:47.603007 | orchestrator | ++ INTERACTIVE=false 2025-01-16 15:23:47.603053 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-01-16 15:23:47.603068 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-01-16 15:23:47.603080 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-01-16 15:23:47.603108 | orchestrator | +++ docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version"}}' osism-ansible 2025-01-16 15:23:47.603145 | orchestrator | 2025-01-16 15:23:47.621622 | orchestrator | # CHECK 2025-01-16 15:23:47.621736 | orchestrator | 2025-01-16 15:23:47.621756 | orchestrator | ++ export MANAGER_VERSION=latest 2025-01-16 15:23:47.621773 | orchestrator | ++ MANAGER_VERSION=latest 2025-01-16 15:23:47.621787 | orchestrator | + echo 2025-01-16 15:23:47.621801 | orchestrator | + echo '# CHECK' 2025-01-16 15:23:47.621818 | orchestrator | + echo 2025-01-16 15:23:47.621834 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-01-16 15:23:47.621849 | orchestrator | ++ semver latest 5.0.0 2025-01-16 15:23:47.621882 | orchestrator | 2025-01-16 15:23:48.886706 | orchestrator | ## Containers @ testbed-manager 2025-01-16 15:23:48.886872 | orchestrator | 2025-01-16 15:23:48.886893 | orchestrator | + [[ -1 -eq -1 ]] 2025-01-16 15:23:48.886910 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-01-16 15:23:48.886924 | orchestrator | + echo 2025-01-16 15:23:48.886942 | orchestrator | + echo '## Containers @ testbed-manager' 2025-01-16 15:23:48.886957 | orchestrator | + echo 2025-01-16 15:23:48.886972 | orchestrator | + osism container testbed-manager ps 2025-01-16 15:23:48.887035 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-01-16 15:23:48.887071 | orchestrator | e1911a08c1d0 nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter:2024.1 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_blackbox_exporter 2025-01-16 15:23:48.887105 | orchestrator | 7cd9168e67bd nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager:2024.1 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_alertmanager 2025-01-16 15:23:48.887136 | orchestrator | bb37aa9e9d23 nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_cadvisor 2025-01-16 15:23:48.887165 | orchestrator | 6b0a72b20b9b nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes prometheus_node_exporter 2025-01-16 15:23:48.887192 | orchestrator | 37eb8dfe4b6c nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server:2024.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes prometheus_server 2025-01-16 15:23:48.887225 | orchestrator | 8e20e410d865 quay.io/osism/cephclient:quincy "/usr/bin/dumb-init …" 13 minutes ago Up 12 minutes cephclient 2025-01-16 15:23:48.887260 | orchestrator | a55efd662916 nexus.testbed.osism.xyz:8193/kolla/cron:2024.1 "dumb-init --single-…" 23 minutes ago Up 23 minutes cron 2025-01-16 15:23:48.887275 | orchestrator | 
63162d5d4cf3 nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1 "dumb-init --single-…" 23 minutes ago Up 23 minutes kolla_toolbox 2025-01-16 15:23:48.887289 | orchestrator | 96d04b0d6b54 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 23 minutes ago Up 23 minutes (healthy) 80/tcp phpmyadmin 2025-01-16 15:23:48.887339 | orchestrator | 21fcb0beb949 nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1 "dumb-init --single-…" 23 minutes ago Up 23 minutes fluentd 2025-01-16 15:23:48.887355 | orchestrator | 65aeb8bdd451 quay.io/osism/openstackclient:2024.1 "/usr/bin/dumb-init …" 24 minutes ago Up 23 minutes openstackclient 2025-01-16 15:23:48.887369 | orchestrator | 54c3b31308a0 quay.io/osism/homer:v24.12.1 "/bin/sh /entrypoint…" 24 minutes ago Up 24 minutes (healthy) 8080/tcp homer 2025-01-16 15:23:48.887391 | orchestrator | 8234314b5847 quay.io/osism/osism-ansible:latest "/entrypoint.sh osis…" 29 minutes ago Up 29 minutes (healthy) osism-ansible 2025-01-16 15:23:48.887405 | orchestrator | 06177db1c0fd ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 38 minutes ago Up 37 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2025-01-16 15:23:48.887420 | orchestrator | ab6dafedcae0 quay.io/osism/nexus:3.76.0 "/opt/sonatype/nexus…" 39 minutes ago Up 39 minutes (healthy) 8081/tcp, 192.168.16.5:8191-8199->8191-8199/tcp nexus 2025-01-16 15:23:48.887450 | orchestrator | c5c6dbcddecc quay.io/osism/kolla-ansible:2024.1 "/entrypoint.sh osis…" 42 minutes ago Up 42 minutes (healthy) kolla-ansible 2025-01-16 15:23:48.887465 | orchestrator | 8a89f5ac7bb4 quay.io/osism/ceph-ansible:quincy "/entrypoint.sh osis…" 42 minutes ago Up 42 minutes (healthy) ceph-ansible 2025-01-16 15:23:48.887480 | orchestrator | b887f72dbba8 quay.io/osism/osism-kubernetes:latest "/entrypoint.sh osis…" 42 minutes ago Up 42 minutes (healthy) osism-kubernetes 2025-01-16 15:23:48.887494 | orchestrator | d2ae3ecd3b5a quay.io/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" 42 minutes ago Up 42 minutes (healthy) 8000/tcp manager-ara-server-1 2025-01-16 15:23:48.887508 | orchestrator | e91cb7690b94 quay.io/osism/osism-netbox:latest "/usr/bin/tini -- os…" 42 minutes ago Up 42 minutes (healthy) manager-netbox-1 2025-01-16 15:23:48.887548 | orchestrator | d359671aef88 quay.io/osism/osism:latest "/usr/bin/tini -- os…" 42 minutes ago Up 42 minutes (healthy) manager-flower-1 2025-01-16 15:23:48.887566 | orchestrator | 8f145e459fd1 quay.io/osism/osism:latest "/usr/bin/tini -- os…" 42 minutes ago Up 42 minutes (healthy) manager-listener-1 2025-01-16 15:23:48.887580 | orchestrator | 1630ab0f10fe quay.io/osism/osism:latest "/usr/bin/tini -- sl…" 42 minutes ago Up 42 minutes (healthy) osismclient 2025-01-16 15:23:48.887610 | orchestrator | b3c988039db8 quay.io/osism/osism:latest "/usr/bin/tini -- os…" 42 minutes ago Up 42 minutes (healthy) manager-watchdog-1 2025-01-16 15:23:48.887633 | orchestrator | eaa528d396d1 quay.io/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" 42 minutes ago Up 42 minutes (healthy) manager-inventory_reconciler-1 2025-01-16 15:23:48.887647 | orchestrator | 0f296e9bb08a quay.io/osism/osism:latest "/usr/bin/tini -- os…" 42 minutes ago Up 42 minutes (healthy) manager-openstack-1 2025-01-16 15:23:48.887661 | orchestrator | e375b99e3314 quay.io/osism/osism:latest "/usr/bin/tini -- os…" 42 minutes ago Up 42 minutes (healthy) manager-conductor-1 2025-01-16 15:23:48.887675 | orchestrator | 0715854b3a4e quay.io/osism/osism:latest "/usr/bin/tini -- os…" 42 minutes ago Up 42 minutes (healthy) manager-beat-1 2025-01-16 
15:23:48.887690 | orchestrator | 299ff3c7d2f3 quay.io/osism/osism:latest "/usr/bin/tini -- os…" 42 minutes ago Up 42 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2025-01-16 15:23:48.887704 | orchestrator | 81ca34a2160b redis:7.4.2-alpine "docker-entrypoint.s…" 42 minutes ago Up 42 minutes (healthy) 6379/tcp manager-redis-1 2025-01-16 15:23:48.887722 | orchestrator | 934cab19cbce mariadb:11.6.2 "docker-entrypoint.s…" 42 minutes ago Up 42 minutes (healthy) 3306/tcp manager-mariadb-1 2025-01-16 15:23:48.887744 | orchestrator | 0020b322fc33 quay.io/osism/netbox:v4.1.10 "/opt/netbox/venv/bi…" 46 minutes ago Up 43 minutes (healthy) netbox-netbox-worker-1 2025-01-16 15:23:48.887932 | orchestrator | 1dc1bf3948aa quay.io/osism/netbox:v4.1.10 "/usr/bin/tini -- /o…" 46 minutes ago Up 46 minutes (healthy) netbox-netbox-1 2025-01-16 15:23:49.004022 | orchestrator | 7c589ba25f87 postgres:16.6-alpine "docker-entrypoint.s…" 46 minutes ago Up 46 minutes (healthy) 5432/tcp netbox-postgres-1 2025-01-16 15:23:49.004220 | orchestrator | d43ef710a56c redis:7.4.2-alpine "docker-entrypoint.s…" 46 minutes ago Up 46 minutes (healthy) 6379/tcp netbox-redis-1 2025-01-16 15:23:49.004323 | orchestrator | 260a624d10a7 traefik:v3.3.1 "/entrypoint.sh trae…" 47 minutes ago Up 47 minutes (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2025-01-16 15:23:49.004388 | orchestrator | 2025-01-16 15:23:50.210001 | orchestrator | ## Images @ testbed-manager 2025-01-16 15:23:50.210272 | orchestrator | 2025-01-16 15:23:50.210307 | orchestrator | + echo 2025-01-16 15:23:50.210336 | orchestrator | + echo '## Images @ testbed-manager' 2025-01-16 15:23:50.210362 | orchestrator | + echo 2025-01-16 15:23:50.210387 | orchestrator | + osism container testbed-manager images 2025-01-16 15:23:50.210486 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-01-16 15:23:50.320204 | orchestrator | quay.io/osism/osism-ansible latest 8575b2cf5dd9 36 minutes ago 926MB 2025-01-16 15:23:50.320339 | orchestrator | quay.io/osism/inventory-reconciler latest 91dabd6b71fb About an hour ago 269MB 2025-01-16 15:23:50.320348 | orchestrator | quay.io/osism/homer v24.12.1 26b0962e1e17 12 hours ago 11MB 2025-01-16 15:23:50.320353 | orchestrator | quay.io/osism/openstackclient 2024.1 fd5ab9df9f4b 12 hours ago 246MB 2025-01-16 15:23:50.320361 | orchestrator | quay.io/osism/cephclient quincy 46e58aa34394 12 hours ago 446MB 2025-01-16 15:23:50.320367 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/fluentd 2024.1 83b42e4e1493 14 hours ago 520MB 2025-01-16 15:23:50.320372 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/cron 2024.1 d9b0ee9a23a3 14 hours ago 249MB 2025-01-16 15:23:50.320377 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox 2024.1 61c56d89dc29 14 hours ago 625MB 2025-01-16 15:23:50.320382 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter 2024.1 c72a15e50b5f 14 hours ago 288MB 2025-01-16 15:23:50.320404 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor 2024.1 a2a1c10f5be8 14 hours ago 343MB 2025-01-16 15:23:50.320410 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/prometheus-alertmanager 2024.1 36115560ed97 14 hours ago 383MB 2025-01-16 15:23:50.320415 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/prometheus-v2-server 2024.1 3019e984305c 14 hours ago 750MB 2025-01-16 15:23:50.320420 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/prometheus-blackbox-exporter 2024.1 6f5c02b71cec 14 hours ago 290MB 
2025-01-16 15:23:50.320425 | orchestrator | quay.io/osism/osism-ansible 65a2af3feb71 15 hours ago 926MB 2025-01-16 15:23:50.320430 | orchestrator | quay.io/osism/osism-netbox latest 492d40fc0296 15 hours ago 557MB 2025-01-16 15:23:50.320435 | orchestrator | quay.io/osism/osism-kubernetes latest 80978359563a 15 hours ago 1.04GB 2025-01-16 15:23:50.320440 | orchestrator | quay.io/osism/osism latest fd637cbad12f 15 hours ago 531MB 2025-01-16 15:23:50.320444 | orchestrator | quay.io/osism/kolla-ansible 2024.1 ddc59d57f83e 15 hours ago 573MB 2025-01-16 15:23:50.320450 | orchestrator | quay.io/osism/ceph-ansible quincy c81f85d36579 15 hours ago 495MB 2025-01-16 15:23:50.320455 | orchestrator | phpmyadmin/phpmyadmin 5.2 9253f5a600a8 44 hours ago 560MB 2025-01-16 15:23:50.320460 | orchestrator | quay.io/osism/nexus 3.76.0 accf70451637 7 days ago 640MB 2025-01-16 15:23:50.320465 | orchestrator | traefik v3.3.1 d227d9044add 9 days ago 190MB 2025-01-16 15:23:50.320470 | orchestrator | redis 7.4.2-alpine ee33180a8437 9 days ago 41.4MB 2025-01-16 15:23:50.320475 | orchestrator | quay.io/osism/netbox v4.1.10 3d731b2d642c 3 weeks ago 761MB 2025-01-16 15:23:50.320480 | orchestrator | hashicorp/vault 1.18.3 b9e1daed179f 4 weeks ago 486MB 2025-01-16 15:23:50.320484 | orchestrator | postgres 16.6-alpine 81a348707bc6 5 weeks ago 275MB 2025-01-16 15:23:50.320514 | orchestrator | mariadb 11.6.2 6722945a6940 7 weeks ago 407MB 2025-01-16 15:23:50.320519 | orchestrator | quay.io/osism/ara-server 1.7.2 bb44122eb176 4 months ago 300MB 2025-01-16 15:23:50.320545 | orchestrator | ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 7 months ago 146MB 2025-01-16 15:23:50.320569 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-01-16 15:23:50.337842 | orchestrator | ++ semver latest 5.0.0 2025-01-16 15:23:50.338060 | orchestrator | 2025-01-16 15:23:51.600491 | orchestrator | ## Containers @ testbed-node-0 2025-01-16 15:23:51.600659 | orchestrator | 2025-01-16 15:23:51.600676 | orchestrator | + [[ -1 -eq -1 ]] 2025-01-16 15:23:51.600689 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-01-16 15:23:51.600703 | orchestrator | + echo 2025-01-16 15:23:51.600714 | orchestrator | + echo '## Containers @ testbed-node-0' 2025-01-16 15:23:51.600726 | orchestrator | + echo 2025-01-16 15:23:51.600736 | orchestrator | + osism container testbed-node-0 ps 2025-01-16 15:23:51.600770 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-01-16 15:23:51.600783 | orchestrator | c9fcd45e1c41 nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) nova_compute_ironic 2025-01-16 15:23:51.600795 | orchestrator | b80afd5cc9ed nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) nova_novncproxy 2025-01-16 15:23:51.600806 | orchestrator | 3b25a73db517 nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) nova_conductor 2025-01-16 15:23:51.600816 | orchestrator | dcd1ae3ce103 nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana 2025-01-16 15:23:51.600827 | orchestrator | ea5ba4aa3c01 nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) nova_api 2025-01-16 15:23:51.600838 | orchestrator | dacc9beadf04 nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1 "dumb-init --single-…" 6 
minutes ago Up 6 minutes (healthy) nova_scheduler 2025-01-16 15:23:51.600848 | orchestrator | 34922dc711dd nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) glance_api 2025-01-16 15:23:51.600859 | orchestrator | 41c0e0d9befd nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_elasticsearch_exporter 2025-01-16 15:23:51.600870 | orchestrator | 7dd3196d5b43 nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) cinder_scheduler 2025-01-16 15:23:51.600881 | orchestrator | a1ca304e8fb4 nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) cinder_api 2025-01-16 15:23:51.600891 | orchestrator | bc2a4bae3b30 nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_cadvisor 2025-01-16 15:23:51.600906 | orchestrator | c5c64222ec5d nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes prometheus_memcached_exporter 2025-01-16 15:23:51.600923 | orchestrator | ef218cd7acbb nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes prometheus_mysqld_exporter 2025-01-16 15:23:51.600977 | orchestrator | f133e1e21e91 nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes prometheus_node_exporter 2025-01-16 15:23:51.600997 | orchestrator | d13c36e28346 nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) ironic_neutron_agent 2025-01-16 15:23:51.601035 | orchestrator | b2dd8acaaea1 nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) ironic_http 2025-01-16 15:23:51.601052 | orchestrator | fad5df9482fc nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_conductor 2025-01-16 15:23:51.601071 | orchestrator | cbf1f3541cea nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes ironic_tftp 2025-01-16 15:23:51.601090 | orchestrator | 9aec03b75e41 nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) ironic_inspector 2025-01-16 15:23:51.601116 | orchestrator | c7ceca34e811 nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_api 2025-01-16 15:23:51.601152 | orchestrator | 743a0974a14f nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) ironic_api 2025-01-16 15:23:51.601165 | orchestrator | 98634af80618 nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) ironic_conductor 2025-01-16 15:23:51.601175 | orchestrator | 899263d4af4b nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server 2025-01-16 15:23:51.601186 | orchestrator | c067afa467e1 nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) placement_api 2025-01-16 15:23:51.601196 | orchestrator | 
a11d7c11e084 nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_worker 2025-01-16 15:23:51.601207 | orchestrator | 00a4095cc76a nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_mdns 2025-01-16 15:23:51.601217 | orchestrator | ba0d6696f3b5 nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_producer 2025-01-16 15:23:51.601227 | orchestrator | 7bc63942114d nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_central 2025-01-16 15:23:51.601238 | orchestrator | 555b4510df41 nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_api 2025-01-16 15:23:51.601248 | orchestrator | b5d1877514d5 nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_worker 2025-01-16 15:23:51.601259 | orchestrator | c06eb5d87930 nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) designate_backend_bind9 2025-01-16 15:23:51.601270 | orchestrator | 544a7b5cfd62 nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_keystone_listener 2025-01-16 15:23:51.601319 | orchestrator | 0460adad9278 nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy "/opt/ceph-container…" 12 minutes ago Up 12 minutes ceph-mgr-testbed-node-0 2025-01-16 15:23:51.601330 | orchestrator | f22103d3c8e6 nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api 2025-01-16 15:23:51.601340 | orchestrator | d12b8e66a98f nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) keystone 2025-01-16 15:23:51.601351 | orchestrator | 2f3aead3c0a9 nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) keystone_fernet 2025-01-16 15:23:51.601361 | orchestrator | 25df2fc31fea nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) keystone_ssh 2025-01-16 15:23:51.601371 | orchestrator | e7d10eb8dbea nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) horizon 2025-01-16 15:23:51.601381 | orchestrator | 1e8bd4231072 nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1 "dumb-init -- kolla_…" 15 minutes ago Up 15 minutes (healthy) mariadb 2025-01-16 15:23:51.601396 | orchestrator | c92ce4666e47 nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1 "dumb-init --single-…" 17 minutes ago Up 17 minutes mariadb_clustercheck 2025-01-16 15:23:51.601406 | orchestrator | 3a5ee4713542 nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy "/usr/bin/ceph-crash" 17 minutes ago Up 17 minutes ceph-crash-testbed-node-0 2025-01-16 15:23:51.601417 | orchestrator | 8163b97ddf36 nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) opensearch_dashboards 2025-01-16 15:23:51.601433 | orchestrator | 5459cd9d55df nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1 "dumb-init 
--single-…" 17 minutes ago Up 17 minutes (healthy) opensearch 2025-01-16 15:23:51.698361 | orchestrator | 5f4530536050 nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes keepalived 2025-01-16 15:23:51.698542 | orchestrator | 8db81fed5d36 nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) haproxy 2025-01-16 15:23:51.698558 | orchestrator | 897afba76acc nexus.testbed.osism.xyz:8193/kolla/ovn-northd:2024.1 "dumb-init --single-…" 20 minutes ago Up 20 minutes ovn_northd 2025-01-16 15:23:51.698568 | orchestrator | fcf22d559515 nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server:2024.1 "dumb-init --single-…" 21 minutes ago Up 21 minutes ovn_sb_db 2025-01-16 15:23:51.698579 | orchestrator | e0e18dd958df nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server:2024.1 "dumb-init --single-…" 21 minutes ago Up 21 minutes ovn_nb_db 2025-01-16 15:23:51.698590 | orchestrator | 439665be0bb2 nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy "/opt/ceph-container…" 21 minutes ago Up 21 minutes ceph-mon-testbed-node-0 2025-01-16 15:23:51.698600 | orchestrator | 4bc2735ecfcb nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1 "dumb-init --single-…" 21 minutes ago Up 21 minutes ovn_controller 2025-01-16 15:23:51.698610 | orchestrator | 58c3e34e5f46 nexus.testbed.osism.xyz:8193/kolla/rabbitmq:2024.1 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) rabbitmq 2025-01-16 15:23:51.698650 | orchestrator | bc7466e5ac51 nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) openvswitch_vswitchd 2025-01-16 15:23:51.698660 | orchestrator | 0d9c939f3480 nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) openvswitch_db 2025-01-16 15:23:51.698670 | orchestrator | fa19ac951c47 nexus.testbed.osism.xyz:8193/kolla/redis-sentinel:2024.1 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) redis_sentinel 2025-01-16 15:23:51.698679 | orchestrator | 136afd969645 nexus.testbed.osism.xyz:8193/kolla/redis:2024.1 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) redis 2025-01-16 15:23:51.698689 | orchestrator | 3fb51d76fe62 nexus.testbed.osism.xyz:8193/kolla/memcached:2024.1 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) memcached 2025-01-16 15:23:51.698698 | orchestrator | b3d6ca903ec1 nexus.testbed.osism.xyz:8193/kolla/cron:2024.1 "dumb-init --single-…" 23 minutes ago Up 23 minutes cron 2025-01-16 15:23:51.698708 | orchestrator | e73c454f85ec nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1 "dumb-init --single-…" 23 minutes ago Up 23 minutes kolla_toolbox 2025-01-16 15:23:51.698718 | orchestrator | ce1c15bc1da2 nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1 "dumb-init --single-…" 23 minutes ago Up 23 minutes fluentd 2025-01-16 15:23:51.698750 | orchestrator | 2025-01-16 15:23:52.915475 | orchestrator | ## Images @ testbed-node-0 2025-01-16 15:23:52.915623 | orchestrator | 2025-01-16 15:23:52.915632 | orchestrator | + echo 2025-01-16 15:23:52.915639 | orchestrator | + echo '## Images @ testbed-node-0' 2025-01-16 15:23:52.915645 | orchestrator | + echo 2025-01-16 15:23:52.915650 | orchestrator | + osism container testbed-node-0 images 2025-01-16 15:23:52.915672 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-01-16 15:23:52.915678 | orchestrator | nexus.testbed.osism.xyz:8193/osism/ceph-daemon quincy 
c80ca6218de5 12 hours ago 1.38GB 2025-01-16 15:23:52.915684 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/memcached 2024.1 1e3fd485072a 14 hours ago 250MB 2025-01-16 15:23:52.915689 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/fluentd 2024.1 83b42e4e1493 14 hours ago 520MB 2025-01-16 15:23:52.915694 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/rabbitmq 2024.1 88ba544ea286 14 hours ago 306MB 2025-01-16 15:23:52.915709 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/haproxy 2024.1 3af9f690d9c3 14 hours ago 256MB 2025-01-16 15:23:52.915714 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/cron 2024.1 d9b0ee9a23a3 14 hours ago 249MB 2025-01-16 15:23:52.915719 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox 2024.1 61c56d89dc29 14 hours ago 625MB 2025-01-16 15:23:52.915724 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/grafana 2024.1 d01c6ff6ee75 14 hours ago 765MB 2025-01-16 15:23:52.915729 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/opensearch 2024.1 808a2ae97bd1 14 hours ago 1.46GB 2025-01-16 15:23:52.915734 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards 2024.1 7b8c748ecb0c 14 hours ago 1.42GB 2025-01-16 15:23:52.915739 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/keepalived 2024.1 a003a62d11ad 14 hours ago 260MB 2025-01-16 15:23:52.915743 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck 2024.1 e5de9d2199c7 14 hours ago 282MB 2025-01-16 15:23:52.915767 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/mariadb-server 2024.1 e7ff740a2913 14 hours ago 435MB 2025-01-16 15:23:52.915772 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/redis 2024.1 389b947a514c 14 hours ago 254MB 2025-01-16 15:23:52.915777 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/redis-sentinel 2024.1 b09f0dc670ec 14 hours ago 254MB 2025-01-16 15:23:52.915782 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/horizon 2024.1 8dd14fab8171 14 hours ago 1.05GB 2025-01-16 15:23:52.915787 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ironic-inspector 2024.1 6562e7942267 14 hours ago 921MB 2025-01-16 15:23:52.915792 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server 2024.1 9b2d84057922 14 hours ago 265MB 2025-01-16 15:23:52.915796 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd 2024.1 5287d721c9f5 14 hours ago 265MB 2025-01-16 15:23:52.915801 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter 2024.1 c72a15e50b5f 14 hours ago 288MB 2025-01-16 15:23:52.915810 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor 2024.1 a2a1c10f5be8 14 hours ago 343MB 2025-01-16 15:23:52.915815 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter 2024.1 97fcad667c96 14 hours ago 280MB 2025-01-16 15:23:52.915819 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter 2024.1 eb9c66fe479b 14 hours ago 274MB 2025-01-16 15:23:52.915824 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter 2024.1 f40628f6d951 14 hours ago 278MB 2025-01-16 15:23:52.915829 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/heat-api 2024.1 e6fd6b77e82b 14 hours ago 962MB 2025-01-16 15:23:52.915834 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/heat-api-cfn 2024.1 ccc1792a27bd 14 hours ago 962MB 2025-01-16 15:23:52.915838 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/heat-engine 2024.1 bf21303d46c2 14 hours ago 963MB 2025-01-16 15:23:52.915843 | orchestrator | 
nexus.testbed.osism.xyz:8193/kolla/aodh-api 2024.1 06a75826cb8d 14 hours ago 881MB 2025-01-16 15:23:52.915848 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/aodh-notifier 2024.1 75e3c79b5467 14 hours ago 881MB 2025-01-16 15:23:52.915854 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/aodh-evaluator 2024.1 81258b5dbc10 14 hours ago 881MB 2025-01-16 15:23:52.915860 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/aodh-listener 2024.1 acd522eb1ea3 14 hours ago 881MB 2025-01-16 15:23:52.915865 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/designate-mdns 2024.1 2f8bd137f319 14 hours ago 891MB 2025-01-16 15:23:52.915871 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/designate-producer 2024.1 434d00a66d6f 14 hours ago 891MB 2025-01-16 15:23:52.915884 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/designate-worker 2024.1 dbb74cc89d18 14 hours ago 895MB 2025-01-16 15:23:52.915889 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9 2024.1 061d74fd6687 14 hours ago 895MB 2025-01-16 15:23:52.915894 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/designate-central 2024.1 a0accfae2f26 14 hours ago 890MB 2025-01-16 15:23:52.915899 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/designate-api 2024.1 a8eeecae657f 14 hours ago 891MB 2025-01-16 15:23:52.915904 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/glance-api 2024.1 2061d34fb941 14 hours ago 984MB 2025-01-16 15:23:52.915909 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/magnum-api 2024.1 14a4a6985399 14 hours ago 1.01GB 2025-01-16 15:23:52.915913 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/magnum-conductor 2024.1 465a7b1c1772 14 hours ago 1.12GB 2025-01-16 15:23:52.915918 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent 2024.1 8c1ef9df006c 14 hours ago 949MB 2025-01-16 15:23:52.915929 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping 2024.1 078f7e2df55a 14 hours ago 929MB 2025-01-16 15:23:52.915934 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/octavia-api 2024.1 8671deb9cdfc 14 hours ago 949MB 2025-01-16 15:23:52.915938 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager 2024.1 c990e4c58dae 14 hours ago 929MB 2025-01-16 15:23:52.915943 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/octavia-worker 2024.1 0cd0be48c5eb 14 hours ago 929MB 2025-01-16 15:23:52.915948 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ironic-pxe 2024.1 4ef0d0d39ed8 14 hours ago 1.02GB 2025-01-16 15:23:52.915953 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ironic-api 2024.1 5a980b658498 14 hours ago 962MB 2025-01-16 15:23:52.915958 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ironic-conductor 2024.1 5a6e5182ad89 14 hours ago 1.21GB 2025-01-16 15:23:52.915963 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ceilometer-central 2024.1 24d1f1e33dee 14 hours ago 884MB 2025-01-16 15:23:52.915967 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ceilometer-notification 2024.1 7aee727f3e03 14 hours ago 884MB 2025-01-16 15:23:52.915972 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/nova-scheduler 2024.1 a355b88dfb97 14 hours ago 1.1GB 2025-01-16 15:23:52.915977 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic 2024.1 d75489ab7724 14 hours ago 1.11GB 2025-01-16 15:23:52.915988 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/nova-api 2024.1 a0e96575cd21 14 hours ago 1.1GB 2025-01-16 15:23:52.915993 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/nova-conductor 2024.1 
a07c67cb23e0 14 hours ago 1.1GB 2025-01-16 15:23:52.915998 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy 2024.1 02d67bf0933c 14 hours ago 1.2GB 2025-01-16 15:23:52.916004 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/placement-api 2024.1 1dec80563835 14 hours ago 883MB 2025-01-16 15:23:52.916009 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/skyline-console 2024.1 a2a437baad7b 14 hours ago 964MB 2025-01-16 15:23:52.916015 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/skyline-apiserver 2024.1 a41b4a0d497f 14 hours ago 943MB 2025-01-16 15:23:52.916020 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent 2024.1 a0476fb93475 14 hours ago 1.04GB 2025-01-16 15:23:52.916026 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/neutron-server 2024.1 5578e4f4a4ac 14 hours ago 1.05GB 2025-01-16 15:23:52.916031 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/keystone-fernet 2024.1 bb1d55198002 14 hours ago 933MB 2025-01-16 15:23:52.916036 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/keystone-ssh 2024.1 9d0d24cf30ca 14 hours ago 936MB 2025-01-16 15:23:52.916042 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/keystone 2024.1 f082d781f803 14 hours ago 957MB 2025-01-16 15:23:52.916047 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener 2024.1 9068c65208a8 14 hours ago 898MB 2025-01-16 15:23:52.916052 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/barbican-worker 2024.1 362ff8978481 14 hours ago 898MB 2025-01-16 15:23:52.916058 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/barbican-api 2024.1 cf5141a2c1c8 14 hours ago 897MB 2025-01-16 15:23:52.916063 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/cinder-api 2024.1 aa797e1a2abe 14 hours ago 1.28GB 2025-01-16 15:23:52.916068 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler 2024.1 f905f413a6dd 14 hours ago 1.28GB 2025-01-16 15:23:52.916077 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server 2024.1 20e8cd9889fc 14 hours ago 776MB 2025-01-16 15:23:53.015829 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ovn-northd 2024.1 29062ab3bf9d 14 hours ago 777MB 2025-01-16 15:23:53.015952 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ovn-controller 2024.1 aaad10f43ffe 14 hours ago 777MB 2025-01-16 15:23:53.015970 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server 2024.1 32ae0f654064 14 hours ago 776MB 2025-01-16 15:23:53.015997 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-01-16 15:23:53.034180 | orchestrator | ++ semver latest 5.0.0 2025-01-16 15:23:53.034319 | orchestrator | 2025-01-16 15:23:54.288316 | orchestrator | ## Containers @ testbed-node-1 2025-01-16 15:23:54.288474 | orchestrator | 2025-01-16 15:23:54.288493 | orchestrator | + [[ -1 -eq -1 ]] 2025-01-16 15:23:54.288509 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-01-16 15:23:54.288628 | orchestrator | + echo 2025-01-16 15:23:54.288650 | orchestrator | + echo '## Containers @ testbed-node-1' 2025-01-16 15:23:54.288666 | orchestrator | + echo 2025-01-16 15:23:54.288681 | orchestrator | + osism container testbed-node-1 ps 2025-01-16 15:23:54.288720 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-01-16 15:23:54.288737 | orchestrator | 4ff69ea340d1 nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) nova_compute_ironic 2025-01-16 15:23:54.288753 | orchestrator | d46a575b2742 
nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) nova_novncproxy 2025-01-16 15:23:54.288768 | orchestrator | 198b72bf3be6 nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) nova_conductor 2025-01-16 15:23:54.288782 | orchestrator | 7c5c6c5c543a nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1 "dumb-init --single-…" 5 minutes ago Up 5 minutes grafana 2025-01-16 15:23:54.288799 | orchestrator | 3e67a15c39d1 nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) nova_api 2025-01-16 15:23:54.288830 | orchestrator | 5a4d16922801 nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) nova_scheduler 2025-01-16 15:23:54.288846 | orchestrator | 0bdcf628e62a nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) glance_api 2025-01-16 15:23:54.288861 | orchestrator | 0b379791ca0f nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_elasticsearch_exporter 2025-01-16 15:23:54.288878 | orchestrator | 98e5489d5c16 nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) cinder_scheduler 2025-01-16 15:23:54.288894 | orchestrator | c3e89c3c799e nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) cinder_api 2025-01-16 15:23:54.288909 | orchestrator | 06fa89137edc nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_cadvisor 2025-01-16 15:23:54.288924 | orchestrator | faec783b800c nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes prometheus_memcached_exporter 2025-01-16 15:23:54.288940 | orchestrator | 53535ffcc362 nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes prometheus_mysqld_exporter 2025-01-16 15:23:54.288986 | orchestrator | e2f757bc0946 nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes prometheus_node_exporter 2025-01-16 15:23:54.289002 | orchestrator | 48d21874571e nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) ironic_neutron_agent 2025-01-16 15:23:54.289017 | orchestrator | 348e982bd9f8 nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) ironic_http 2025-01-16 15:23:54.289032 | orchestrator | 68d5c2156d79 nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_conductor 2025-01-16 15:23:54.289048 | orchestrator | 378a9ab44bfb nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes ironic_tftp 2025-01-16 15:23:54.289063 | orchestrator | 71450a7609ac nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) ironic_inspector 2025-01-16 15:23:54.289079 | orchestrator | fef0ef9de519 nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_api 
2025-01-16 15:23:54.289109 | orchestrator | bf145794f6f5 nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) ironic_api 2025-01-16 15:23:54.289129 | orchestrator | 8d5cd08bebd4 nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) ironic_conductor 2025-01-16 15:23:54.289156 | orchestrator | 8ef213e8f7ad nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server 2025-01-16 15:23:54.289182 | orchestrator | 6f1554621a0e nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) placement_api 2025-01-16 15:23:54.289207 | orchestrator | 8aa8e8621190 nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_worker 2025-01-16 15:23:54.289232 | orchestrator | 8757a71bcc62 nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_mdns 2025-01-16 15:23:54.289256 | orchestrator | 80d655a78162 nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_producer 2025-01-16 15:23:54.289280 | orchestrator | b826dadd3c22 nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_central 2025-01-16 15:23:54.289307 | orchestrator | 49a8222e8dd6 nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_api 2025-01-16 15:23:54.289330 | orchestrator | 020365877e65 nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) designate_backend_bind9 2025-01-16 15:23:54.289353 | orchestrator | d1c4162b9383 nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_worker 2025-01-16 15:23:54.289378 | orchestrator | 21756ecaa68b nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_keystone_listener 2025-01-16 15:23:54.289418 | orchestrator | 93f55f30cd2b nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy "/opt/ceph-container…" 12 minutes ago Up 12 minutes ceph-mgr-testbed-node-1 2025-01-16 15:23:54.289452 | orchestrator | 4e33a691e3d8 nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api 2025-01-16 15:23:54.289477 | orchestrator | f9931ca39bed nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) keystone 2025-01-16 15:23:54.289502 | orchestrator | 352962982c4a nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) keystone_fernet 2025-01-16 15:23:54.289519 | orchestrator | b7ccfe9c94d8 nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) keystone_ssh 2025-01-16 15:23:54.289576 | orchestrator | 1ac14185ec62 nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) horizon 2025-01-16 15:23:54.289591 | orchestrator | 315fa6c2bc3c 
nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1 "dumb-init -- kolla_…" 16 minutes ago Up 16 minutes (healthy) mariadb 2025-01-16 15:23:54.289605 | orchestrator | fdfc99e28b66 nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1 "dumb-init --single-…" 17 minutes ago Up 17 minutes mariadb_clustercheck 2025-01-16 15:23:54.289631 | orchestrator | 8200a0358f69 nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) opensearch_dashboards 2025-01-16 15:23:54.289645 | orchestrator | fdaef101e516 nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy "/usr/bin/ceph-crash" 17 minutes ago Up 17 minutes ceph-crash-testbed-node-1 2025-01-16 15:23:54.289670 | orchestrator | a4842a525b25 nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) opensearch 2025-01-16 15:23:54.393215 | orchestrator | 2beb92305168 nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes keepalived 2025-01-16 15:23:54.393370 | orchestrator | 0c252541a231 nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) haproxy 2025-01-16 15:23:54.393389 | orchestrator | 7940c15ee8e4 nexus.testbed.osism.xyz:8193/kolla/ovn-northd:2024.1 "dumb-init --single-…" 21 minutes ago Up 20 minutes ovn_northd 2025-01-16 15:23:54.393405 | orchestrator | 75435725830a nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server:2024.1 "dumb-init --single-…" 21 minutes ago Up 20 minutes ovn_sb_db 2025-01-16 15:23:54.393421 | orchestrator | acce9f423956 nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server:2024.1 "dumb-init --single-…" 21 minutes ago Up 20 minutes ovn_nb_db 2025-01-16 15:23:54.393437 | orchestrator | 72057891a3d7 nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy "/opt/ceph-container…" 21 minutes ago Up 21 minutes ceph-mon-testbed-node-1 2025-01-16 15:23:54.393453 | orchestrator | deb45b0a3888 nexus.testbed.osism.xyz:8193/kolla/rabbitmq:2024.1 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) rabbitmq 2025-01-16 15:23:54.393467 | orchestrator | 213b014a49b6 nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1 "dumb-init --single-…" 21 minutes ago Up 21 minutes ovn_controller 2025-01-16 15:23:54.393566 | orchestrator | eceab8791452 nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) openvswitch_vswitchd 2025-01-16 15:23:54.393583 | orchestrator | ead7c8ced69e nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) openvswitch_db 2025-01-16 15:23:54.393597 | orchestrator | d2ff4f40d380 nexus.testbed.osism.xyz:8193/kolla/redis-sentinel:2024.1 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) redis_sentinel 2025-01-16 15:23:54.393612 | orchestrator | 7fcdd39e5a55 nexus.testbed.osism.xyz:8193/kolla/redis:2024.1 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) redis 2025-01-16 15:23:54.393626 | orchestrator | 318efdd18e54 nexus.testbed.osism.xyz:8193/kolla/memcached:2024.1 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) memcached 2025-01-16 15:23:54.393655 | orchestrator | 73d844c22f65 nexus.testbed.osism.xyz:8193/kolla/cron:2024.1 "dumb-init --single-…" 23 minutes ago Up 23 minutes cron 2025-01-16 15:23:54.393671 | orchestrator | 953e1ae3d788 nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1 "dumb-init 
--single-…" 23 minutes ago Up 23 minutes kolla_toolbox 2025-01-16 15:23:54.393686 | orchestrator | 9da4b67ff7fe nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1 "dumb-init --single-…" 24 minutes ago Up 24 minutes fluentd 2025-01-16 15:23:54.393719 | orchestrator | 2025-01-16 15:23:55.612992 | orchestrator | ## Images @ testbed-node-1 2025-01-16 15:23:55.613175 | orchestrator | 2025-01-16 15:23:55.613902 | orchestrator | + echo 2025-01-16 15:23:55.613922 | orchestrator | + echo '## Images @ testbed-node-1' 2025-01-16 15:23:55.613935 | orchestrator | + echo 2025-01-16 15:23:55.613946 | orchestrator | + osism container testbed-node-1 images 2025-01-16 15:23:55.613978 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-01-16 15:23:55.613990 | orchestrator | nexus.testbed.osism.xyz:8193/osism/ceph-daemon quincy c80ca6218de5 12 hours ago 1.38GB 2025-01-16 15:23:55.614001 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/memcached 2024.1 1e3fd485072a 14 hours ago 250MB 2025-01-16 15:23:55.614012 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/fluentd 2024.1 83b42e4e1493 14 hours ago 520MB 2025-01-16 15:23:55.614081 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/rabbitmq 2024.1 88ba544ea286 14 hours ago 306MB 2025-01-16 15:23:55.614091 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/haproxy 2024.1 3af9f690d9c3 14 hours ago 256MB 2025-01-16 15:23:55.614102 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/cron 2024.1 d9b0ee9a23a3 14 hours ago 249MB 2025-01-16 15:23:55.614112 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox 2024.1 61c56d89dc29 14 hours ago 625MB 2025-01-16 15:23:55.614123 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/grafana 2024.1 d01c6ff6ee75 14 hours ago 765MB 2025-01-16 15:23:55.614146 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/opensearch 2024.1 808a2ae97bd1 14 hours ago 1.46GB 2025-01-16 15:23:55.614157 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards 2024.1 7b8c748ecb0c 14 hours ago 1.42GB 2025-01-16 15:23:55.614167 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/keepalived 2024.1 a003a62d11ad 14 hours ago 260MB 2025-01-16 15:23:55.614178 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck 2024.1 e5de9d2199c7 14 hours ago 282MB 2025-01-16 15:23:55.614233 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/mariadb-server 2024.1 e7ff740a2913 14 hours ago 435MB 2025-01-16 15:23:55.614244 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/redis 2024.1 389b947a514c 14 hours ago 254MB 2025-01-16 15:23:55.614254 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/redis-sentinel 2024.1 b09f0dc670ec 14 hours ago 254MB 2025-01-16 15:23:55.614265 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/horizon 2024.1 8dd14fab8171 14 hours ago 1.05GB 2025-01-16 15:23:55.614275 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ironic-inspector 2024.1 6562e7942267 14 hours ago 921MB 2025-01-16 15:23:55.614285 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server 2024.1 9b2d84057922 14 hours ago 265MB 2025-01-16 15:23:55.614295 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd 2024.1 5287d721c9f5 14 hours ago 265MB 2025-01-16 15:23:55.614305 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter 2024.1 c72a15e50b5f 14 hours ago 288MB 2025-01-16 15:23:55.614316 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor 2024.1 a2a1c10f5be8 14 hours ago 343MB 2025-01-16 15:23:55.614329 | orchestrator | 
nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter 2024.1 97fcad667c96 14 hours ago 280MB 2025-01-16 15:23:55.614339 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter 2024.1 eb9c66fe479b 14 hours ago 274MB 2025-01-16 15:23:55.614349 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter 2024.1 f40628f6d951 14 hours ago 278MB 2025-01-16 15:23:55.614359 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/designate-mdns 2024.1 2f8bd137f319 14 hours ago 891MB 2025-01-16 15:23:55.614370 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/designate-producer 2024.1 434d00a66d6f 14 hours ago 891MB 2025-01-16 15:23:55.614380 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/designate-worker 2024.1 dbb74cc89d18 14 hours ago 895MB 2025-01-16 15:23:55.614390 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9 2024.1 061d74fd6687 14 hours ago 895MB 2025-01-16 15:23:55.614400 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/designate-central 2024.1 a0accfae2f26 14 hours ago 890MB 2025-01-16 15:23:55.614410 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/designate-api 2024.1 a8eeecae657f 14 hours ago 891MB 2025-01-16 15:23:55.614420 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/glance-api 2024.1 2061d34fb941 14 hours ago 984MB 2025-01-16 15:23:55.614430 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/magnum-api 2024.1 14a4a6985399 14 hours ago 1.01GB 2025-01-16 15:23:55.614452 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/magnum-conductor 2024.1 465a7b1c1772 14 hours ago 1.12GB 2025-01-16 15:23:55.614478 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ironic-pxe 2024.1 4ef0d0d39ed8 14 hours ago 1.02GB 2025-01-16 15:23:55.715126 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ironic-api 2024.1 5a980b658498 14 hours ago 962MB 2025-01-16 15:23:55.715322 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ironic-conductor 2024.1 5a6e5182ad89 14 hours ago 1.21GB 2025-01-16 15:23:55.715356 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/nova-scheduler 2024.1 a355b88dfb97 14 hours ago 1.1GB 2025-01-16 15:23:55.715382 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic 2024.1 d75489ab7724 14 hours ago 1.11GB 2025-01-16 15:23:55.715407 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/nova-api 2024.1 a0e96575cd21 14 hours ago 1.1GB 2025-01-16 15:23:55.715432 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/nova-conductor 2024.1 a07c67cb23e0 14 hours ago 1.1GB 2025-01-16 15:23:55.715500 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy 2024.1 02d67bf0933c 14 hours ago 1.2GB 2025-01-16 15:23:55.715551 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/placement-api 2024.1 1dec80563835 14 hours ago 883MB 2025-01-16 15:23:55.715578 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent 2024.1 a0476fb93475 14 hours ago 1.04GB 2025-01-16 15:23:55.715604 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/neutron-server 2024.1 5578e4f4a4ac 14 hours ago 1.05GB 2025-01-16 15:23:55.715643 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/keystone-fernet 2024.1 bb1d55198002 14 hours ago 933MB 2025-01-16 15:23:55.715670 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/keystone-ssh 2024.1 9d0d24cf30ca 14 hours ago 936MB 2025-01-16 15:23:55.715697 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/keystone 2024.1 f082d781f803 14 hours ago 957MB 2025-01-16 15:23:55.715721 | orchestrator | 
nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener 2024.1 9068c65208a8 14 hours ago 898MB 2025-01-16 15:23:55.715748 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/barbican-worker 2024.1 362ff8978481 14 hours ago 898MB 2025-01-16 15:23:55.715773 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/barbican-api 2024.1 cf5141a2c1c8 14 hours ago 897MB 2025-01-16 15:23:55.715800 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/cinder-api 2024.1 aa797e1a2abe 14 hours ago 1.28GB 2025-01-16 15:23:55.715823 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler 2024.1 f905f413a6dd 14 hours ago 1.28GB 2025-01-16 15:23:55.715848 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server 2024.1 20e8cd9889fc 14 hours ago 776MB 2025-01-16 15:23:55.715875 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ovn-northd 2024.1 29062ab3bf9d 14 hours ago 777MB 2025-01-16 15:23:55.715901 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ovn-controller 2024.1 aaad10f43ffe 14 hours ago 777MB 2025-01-16 15:23:55.715926 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server 2024.1 32ae0f654064 14 hours ago 776MB 2025-01-16 15:23:55.715980 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-01-16 15:23:55.733623 | orchestrator | ++ semver latest 5.0.0 2025-01-16 15:23:55.733805 | orchestrator | 2025-01-16 15:23:57.010350 | orchestrator | ## Containers @ testbed-node-2 2025-01-16 15:23:57.010452 | orchestrator | 2025-01-16 15:23:57.010466 | orchestrator | + [[ -1 -eq -1 ]] 2025-01-16 15:23:57.010475 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-01-16 15:23:57.010483 | orchestrator | + echo 2025-01-16 15:23:57.010489 | orchestrator | + echo '## Containers @ testbed-node-2' 2025-01-16 15:23:57.010495 | orchestrator | + echo 2025-01-16 15:23:57.010501 | orchestrator | + osism container testbed-node-2 ps 2025-01-16 15:23:57.010519 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-01-16 15:23:57.010555 | orchestrator | 488a3768d42b nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic:2024.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) nova_compute_ironic 2025-01-16 15:23:57.010565 | orchestrator | 526c699c6f7b nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy:2024.1 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) nova_novncproxy 2025-01-16 15:23:57.010571 | orchestrator | dc383968110e nexus.testbed.osism.xyz:8193/kolla/nova-conductor:2024.1 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) nova_conductor 2025-01-16 15:23:57.010576 | orchestrator | dd94b33b09d2 nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1 "dumb-init --single-…" 5 minutes ago Up 5 minutes grafana 2025-01-16 15:23:57.010581 | orchestrator | d70c7c97eaa6 nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) nova_api 2025-01-16 15:23:57.010609 | orchestrator | b0be863b58cf nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) nova_scheduler 2025-01-16 15:23:57.010614 | orchestrator | ea0f5d00451c nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1 "dumb-init --single-…" 7 minutes ago Up 6 minutes (healthy) glance_api 2025-01-16 15:23:57.010620 | orchestrator | 57b4dba4c69b nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter:2024.1 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_elasticsearch_exporter 2025-01-16 15:23:57.010626 | 
orchestrator | 4c2ce0860a3c nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) cinder_scheduler 2025-01-16 15:23:57.010630 | orchestrator | 66ec12fdc4b6 nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) cinder_api 2025-01-16 15:23:57.010636 | orchestrator | 8379fa0faa87 nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_cadvisor 2025-01-16 15:23:57.010641 | orchestrator | b21876d6ff62 nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter:2024.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes prometheus_memcached_exporter 2025-01-16 15:23:57.010646 | orchestrator | 1f41a8389e7b nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter:2024.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes prometheus_mysqld_exporter 2025-01-16 15:23:57.010651 | orchestrator | 5b66feb58c06 nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes prometheus_node_exporter 2025-01-16 15:23:57.010656 | orchestrator | d8855f34eb54 nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) ironic_neutron_agent 2025-01-16 15:23:57.010661 | orchestrator | e7149abc5c3f nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) ironic_http 2025-01-16 15:23:57.010666 | orchestrator | fdad217f3c6b nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_conductor 2025-01-16 15:23:57.010671 | orchestrator | 3f4588bd3c9f nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes ironic_tftp 2025-01-16 15:23:57.010676 | orchestrator | 6654135fd9e8 nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) ironic_inspector 2025-01-16 15:23:57.010681 | orchestrator | 508f8c122b8d nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_api 2025-01-16 15:23:57.010692 | orchestrator | 6336dfaa6d4e nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) ironic_api 2025-01-16 15:23:57.010706 | orchestrator | 32c4fc2a4520 nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) ironic_conductor 2025-01-16 15:23:57.010711 | orchestrator | e6cd0d84fbb8 nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server 2025-01-16 15:23:57.010720 | orchestrator | 007695cbff9e nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) placement_api 2025-01-16 15:23:57.010725 | orchestrator | 3d3536caefa1 nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_worker 2025-01-16 15:23:57.010730 | orchestrator | bbceac750e0c nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_mdns 2025-01-16 15:23:57.010735 | orchestrator | f81325e4712f nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1 "dumb-init --single-…" 11 
minutes ago Up 11 minutes (healthy) designate_producer 2025-01-16 15:23:57.010740 | orchestrator | 75147125cafd nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_central 2025-01-16 15:23:57.010745 | orchestrator | dfcfa3e840f3 nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_api 2025-01-16 15:23:57.010750 | orchestrator | 7149acd571ba nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) designate_backend_bind9 2025-01-16 15:23:57.010755 | orchestrator | 6542e61daec2 nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_worker 2025-01-16 15:23:57.010760 | orchestrator | f009049dc8fd nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy "/opt/ceph-container…" 12 minutes ago Up 12 minutes ceph-mgr-testbed-node-2 2025-01-16 15:23:57.010765 | orchestrator | fc1f9b253e83 nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_keystone_listener 2025-01-16 15:23:57.010769 | orchestrator | 8b8d4cfdd42d nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api 2025-01-16 15:23:57.010774 | orchestrator | 2fdeb5438531 nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) keystone 2025-01-16 15:23:57.010779 | orchestrator | 8d5cc2024d2d nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) keystone_fernet 2025-01-16 15:23:57.010784 | orchestrator | 6924681aeb26 nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) keystone_ssh 2025-01-16 15:23:57.010789 | orchestrator | 96da0177227c nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) horizon 2025-01-16 15:23:57.010794 | orchestrator | 71329f42ea5f nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1 "dumb-init -- kolla_…" 16 minutes ago Up 16 minutes (healthy) mariadb 2025-01-16 15:23:57.010799 | orchestrator | 9b21ff34cdde nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1 "dumb-init --single-…" 17 minutes ago Up 17 minutes mariadb_clustercheck 2025-01-16 15:23:57.010804 | orchestrator | 4a1616fd4a90 nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) opensearch_dashboards 2025-01-16 15:23:57.010814 | orchestrator | ea1cfb5469d0 nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy "/usr/bin/ceph-crash" 17 minutes ago Up 17 minutes ceph-crash-testbed-node-2 2025-01-16 15:23:57.010826 | orchestrator | 8cd33eec3b07 nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) opensearch 2025-01-16 15:23:57.113373 | orchestrator | c7d44ad93f1f nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes keepalived 2025-01-16 15:23:57.113427 | orchestrator | 0d90898127f9 nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) haproxy 2025-01-16 15:23:57.113442 | orchestrator | d38109367755 
nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy "/opt/ceph-container…" 20 minutes ago Up 20 minutes ceph-mon-testbed-node-2 2025-01-16 15:23:57.113456 | orchestrator | 743288277a9b nexus.testbed.osism.xyz:8193/kolla/ovn-northd:2024.1 "dumb-init --single-…" 21 minutes ago Up 20 minutes ovn_northd 2025-01-16 15:23:57.113477 | orchestrator | 5d2e88e03b4b nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server:2024.1 "dumb-init --single-…" 21 minutes ago Up 20 minutes ovn_sb_db 2025-01-16 15:23:57.113490 | orchestrator | 2761262c4925 nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server:2024.1 "dumb-init --single-…" 21 minutes ago Up 20 minutes ovn_nb_db 2025-01-16 15:23:57.113501 | orchestrator | ae2626c38427 nexus.testbed.osism.xyz:8193/kolla/rabbitmq:2024.1 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) rabbitmq 2025-01-16 15:23:57.113512 | orchestrator | c25f891ed9b2 nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1 "dumb-init --single-…" 21 minutes ago Up 21 minutes ovn_controller 2025-01-16 15:23:57.113578 | orchestrator | db7a970815ab nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) openvswitch_vswitchd 2025-01-16 15:23:57.113594 | orchestrator | b8484674d170 nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) openvswitch_db 2025-01-16 15:23:57.113607 | orchestrator | 714802fdb68e nexus.testbed.osism.xyz:8193/kolla/redis-sentinel:2024.1 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) redis_sentinel 2025-01-16 15:23:57.113619 | orchestrator | 6c983f59404e nexus.testbed.osism.xyz:8193/kolla/redis:2024.1 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) redis 2025-01-16 15:23:57.113630 | orchestrator | 589fd7a17ff3 nexus.testbed.osism.xyz:8193/kolla/memcached:2024.1 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) memcached 2025-01-16 15:23:57.113642 | orchestrator | ec639e32f6d7 nexus.testbed.osism.xyz:8193/kolla/cron:2024.1 "dumb-init --single-…" 23 minutes ago Up 23 minutes cron 2025-01-16 15:23:57.113655 | orchestrator | d3ec5447ab5c nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1 "dumb-init --single-…" 23 minutes ago Up 23 minutes kolla_toolbox 2025-01-16 15:23:57.113666 | orchestrator | 896943bff55b nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1 "dumb-init --single-…" 24 minutes ago Up 24 minutes fluentd 2025-01-16 15:23:57.113690 | orchestrator | 2025-01-16 15:23:58.348044 | orchestrator | ## Images @ testbed-node-2 2025-01-16 15:23:58.348141 | orchestrator | 2025-01-16 15:23:58.348152 | orchestrator | + echo 2025-01-16 15:23:58.348160 | orchestrator | + echo '## Images @ testbed-node-2' 2025-01-16 15:23:58.348167 | orchestrator | + echo 2025-01-16 15:23:58.348174 | orchestrator | + osism container testbed-node-2 images 2025-01-16 15:23:58.348206 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-01-16 15:23:58.348243 | orchestrator | nexus.testbed.osism.xyz:8193/osism/ceph-daemon quincy c80ca6218de5 12 hours ago 1.38GB 2025-01-16 15:23:58.348266 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/memcached 2024.1 1e3fd485072a 14 hours ago 250MB 2025-01-16 15:23:58.348278 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/fluentd 2024.1 83b42e4e1493 14 hours ago 520MB 2025-01-16 15:23:58.348288 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/rabbitmq 2024.1 88ba544ea286 14 hours ago 306MB 2025-01-16 15:23:58.348299 | orchestrator | 
nexus.testbed.osism.xyz:8193/kolla/haproxy 2024.1 3af9f690d9c3 14 hours ago 256MB 2025-01-16 15:23:58.348309 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/cron 2024.1 d9b0ee9a23a3 14 hours ago 249MB 2025-01-16 15:23:58.348321 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox 2024.1 61c56d89dc29 14 hours ago 625MB 2025-01-16 15:23:58.348331 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/grafana 2024.1 d01c6ff6ee75 14 hours ago 765MB 2025-01-16 15:23:58.348342 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/opensearch 2024.1 808a2ae97bd1 14 hours ago 1.46GB 2025-01-16 15:23:58.348354 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards 2024.1 7b8c748ecb0c 14 hours ago 1.42GB 2025-01-16 15:23:58.348365 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/keepalived 2024.1 a003a62d11ad 14 hours ago 260MB 2025-01-16 15:23:58.348375 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/mariadb-server 2024.1 e7ff740a2913 14 hours ago 435MB 2025-01-16 15:23:58.348386 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck 2024.1 e5de9d2199c7 14 hours ago 282MB 2025-01-16 15:23:58.348397 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/redis 2024.1 389b947a514c 14 hours ago 254MB 2025-01-16 15:23:58.348409 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/redis-sentinel 2024.1 b09f0dc670ec 14 hours ago 254MB 2025-01-16 15:23:58.348420 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/horizon 2024.1 8dd14fab8171 14 hours ago 1.05GB 2025-01-16 15:23:58.348431 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ironic-inspector 2024.1 6562e7942267 14 hours ago 921MB 2025-01-16 15:23:58.348438 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server 2024.1 9b2d84057922 14 hours ago 265MB 2025-01-16 15:23:58.348445 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter 2024.1 c72a15e50b5f 14 hours ago 288MB 2025-01-16 15:23:58.348451 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd 2024.1 5287d721c9f5 14 hours ago 265MB 2025-01-16 15:23:58.348457 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor 2024.1 a2a1c10f5be8 14 hours ago 343MB 2025-01-16 15:23:58.348463 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/prometheus-mysqld-exporter 2024.1 97fcad667c96 14 hours ago 280MB 2025-01-16 15:23:58.348470 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/prometheus-elasticsearch-exporter 2024.1 eb9c66fe479b 14 hours ago 274MB 2025-01-16 15:23:58.348476 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/prometheus-memcached-exporter 2024.1 f40628f6d951 14 hours ago 278MB 2025-01-16 15:23:58.348482 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/designate-mdns 2024.1 2f8bd137f319 14 hours ago 891MB 2025-01-16 15:23:58.348488 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/designate-producer 2024.1 434d00a66d6f 14 hours ago 891MB 2025-01-16 15:23:58.348494 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/designate-worker 2024.1 dbb74cc89d18 14 hours ago 895MB 2025-01-16 15:23:58.348503 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9 2024.1 061d74fd6687 14 hours ago 895MB 2025-01-16 15:23:58.348516 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/designate-central 2024.1 a0accfae2f26 14 hours ago 890MB 2025-01-16 15:23:58.348539 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/designate-api 2024.1 a8eeecae657f 14 hours ago 891MB 2025-01-16 15:23:58.348546 | orchestrator | 
nexus.testbed.osism.xyz:8193/kolla/glance-api 2024.1 2061d34fb941 14 hours ago 984MB 2025-01-16 15:23:58.348553 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/magnum-api 2024.1 14a4a6985399 14 hours ago 1.01GB 2025-01-16 15:23:58.348559 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/magnum-conductor 2024.1 465a7b1c1772 14 hours ago 1.12GB 2025-01-16 15:23:58.348574 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ironic-pxe 2024.1 4ef0d0d39ed8 14 hours ago 1.02GB 2025-01-16 15:23:58.451287 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ironic-api 2024.1 5a980b658498 14 hours ago 962MB 2025-01-16 15:23:58.451403 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ironic-conductor 2024.1 5a6e5182ad89 14 hours ago 1.21GB 2025-01-16 15:23:58.451422 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/nova-scheduler 2024.1 a355b88dfb97 14 hours ago 1.1GB 2025-01-16 15:23:58.451438 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/nova-compute-ironic 2024.1 d75489ab7724 14 hours ago 1.11GB 2025-01-16 15:23:58.451454 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/nova-api 2024.1 a0e96575cd21 14 hours ago 1.1GB 2025-01-16 15:23:58.451469 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/nova-conductor 2024.1 a07c67cb23e0 14 hours ago 1.1GB 2025-01-16 15:23:58.451487 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/nova-novncproxy 2024.1 02d67bf0933c 14 hours ago 1.2GB 2025-01-16 15:23:58.451502 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/placement-api 2024.1 1dec80563835 14 hours ago 883MB 2025-01-16 15:23:58.451516 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent 2024.1 a0476fb93475 14 hours ago 1.04GB 2025-01-16 15:23:58.451602 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/neutron-server 2024.1 5578e4f4a4ac 14 hours ago 1.05GB 2025-01-16 15:23:58.451621 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/keystone-fernet 2024.1 bb1d55198002 14 hours ago 933MB 2025-01-16 15:23:58.451646 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/keystone-ssh 2024.1 9d0d24cf30ca 14 hours ago 936MB 2025-01-16 15:23:58.451669 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/keystone 2024.1 f082d781f803 14 hours ago 957MB 2025-01-16 15:23:58.451692 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener 2024.1 9068c65208a8 14 hours ago 898MB 2025-01-16 15:23:58.451714 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/barbican-worker 2024.1 362ff8978481 14 hours ago 898MB 2025-01-16 15:23:58.451737 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/barbican-api 2024.1 cf5141a2c1c8 14 hours ago 897MB 2025-01-16 15:23:58.451763 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/cinder-api 2024.1 aa797e1a2abe 14 hours ago 1.28GB 2025-01-16 15:23:58.451786 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler 2024.1 f905f413a6dd 14 hours ago 1.28GB 2025-01-16 15:23:58.451811 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server 2024.1 20e8cd9889fc 14 hours ago 776MB 2025-01-16 15:23:58.451851 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ovn-northd 2024.1 29062ab3bf9d 14 hours ago 777MB 2025-01-16 15:23:58.451879 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ovn-controller 2024.1 aaad10f43ffe 14 hours ago 777MB 2025-01-16 15:23:58.451901 | orchestrator | nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server 2024.1 32ae0f654064 14 hours ago 776MB 2025-01-16 15:23:58.451974 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2025-01-16 15:23:58.454458 | 
orchestrator | + set -e 2025-01-16 15:23:58.454820 | orchestrator | + source /opt/manager-vars.sh 2025-01-16 15:23:58.454849 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-01-16 15:23:58.458415 | orchestrator | ++ NUMBER_OF_NODES=6 2025-01-16 15:23:58.458444 | orchestrator | ++ export CEPH_VERSION=quincy 2025-01-16 15:23:58.458453 | orchestrator | ++ CEPH_VERSION=quincy 2025-01-16 15:23:58.458461 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-01-16 15:23:58.458471 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-01-16 15:23:58.458479 | orchestrator | ++ export MANAGER_VERSION=latest 2025-01-16 15:23:58.458487 | orchestrator | ++ MANAGER_VERSION=latest 2025-01-16 15:23:58.458495 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-01-16 15:23:58.458501 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-01-16 15:23:58.458506 | orchestrator | ++ export ARA=false 2025-01-16 15:23:58.458511 | orchestrator | ++ ARA=false 2025-01-16 15:23:58.458516 | orchestrator | ++ export TEMPEST=false 2025-01-16 15:23:58.458535 | orchestrator | ++ TEMPEST=false 2025-01-16 15:23:58.458542 | orchestrator | ++ export IS_ZUUL=true 2025-01-16 15:23:58.458547 | orchestrator | ++ IS_ZUUL=true 2025-01-16 15:23:58.458552 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.54 2025-01-16 15:23:58.458558 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.54 2025-01-16 15:23:58.458563 | orchestrator | ++ export EXTERNAL_API=false 2025-01-16 15:23:58.458568 | orchestrator | ++ EXTERNAL_API=false 2025-01-16 15:23:58.458573 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-01-16 15:23:58.458578 | orchestrator | ++ IMAGE_USER=ubuntu 2025-01-16 15:23:58.458583 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-01-16 15:23:58.458599 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-01-16 15:23:58.458604 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-01-16 15:23:58.458609 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-01-16 15:23:58.458614 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-01-16 15:23:58.458619 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-services.sh 2025-01-16 15:23:58.458630 | orchestrator | + set -e 2025-01-16 15:23:58.458789 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-01-16 15:23:58.458799 | orchestrator | ++ export INTERACTIVE=false 2025-01-16 15:23:58.458809 | orchestrator | ++ INTERACTIVE=false 2025-01-16 15:23:58.458814 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-01-16 15:23:58.458819 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-01-16 15:23:58.458824 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-01-16 15:23:58.458832 | orchestrator | +++ docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version"}}' osism-ansible 2025-01-16 15:23:58.475072 | orchestrator | 2025-01-16 15:23:58.841246 | orchestrator | # Ceph status 2025-01-16 15:23:58.841354 | orchestrator | 2025-01-16 15:23:58.841367 | orchestrator | ++ export MANAGER_VERSION=latest 2025-01-16 15:23:58.841379 | orchestrator | ++ MANAGER_VERSION=latest 2025-01-16 15:23:58.841389 | orchestrator | + echo 2025-01-16 15:23:58.841399 | orchestrator | + echo '# Ceph status' 2025-01-16 15:23:58.841408 | orchestrator | + echo 2025-01-16 15:23:58.841418 | orchestrator | + ceph -s 2025-01-16 15:23:58.841442 | orchestrator | cluster: 2025-01-16 15:23:58.854878 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2025-01-16 15:23:58.854915 | orchestrator | health: HEALTH_OK 2025-01-16 
15:23:58.854926 | orchestrator | 2025-01-16 15:23:58.854936 | orchestrator | services: 2025-01-16 15:23:58.854946 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 20m) 2025-01-16 15:23:58.854966 | orchestrator | mgr: testbed-node-1(active, since 12m), standbys: testbed-node-0, testbed-node-2 2025-01-16 15:23:58.854977 | orchestrator | mds: 1/1 daemons up, 2 standby 2025-01-16 15:23:58.854986 | orchestrator | osd: 6 osds: 6 up (since 18m), 6 in (since 18m) 2025-01-16 15:23:58.854996 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2025-01-16 15:23:58.855006 | orchestrator | 2025-01-16 15:23:58.855015 | orchestrator | data: 2025-01-16 15:23:58.855025 | orchestrator | volumes: 1/1 healthy 2025-01-16 15:23:58.855034 | orchestrator | pools: 14 pools, 401 pgs 2025-01-16 15:23:58.855044 | orchestrator | objects: 519 objects, 2.2 GiB 2025-01-16 15:23:58.855053 | orchestrator | usage: 8.4 GiB used, 111 GiB / 120 GiB avail 2025-01-16 15:23:58.855063 | orchestrator | pgs: 401 active+clean 2025-01-16 15:23:58.855072 | orchestrator | 2025-01-16 15:23:58.855091 | orchestrator | 2025-01-16 15:23:59.198763 | orchestrator | # Ceph versions 2025-01-16 15:23:59.198885 | orchestrator | 2025-01-16 15:23:59.198904 | orchestrator | + echo 2025-01-16 15:23:59.198941 | orchestrator | + echo '# Ceph versions' 2025-01-16 15:23:59.198977 | orchestrator | + echo 2025-01-16 15:23:59.198992 | orchestrator | + ceph versions 2025-01-16 15:23:59.199024 | orchestrator | { 2025-01-16 15:23:59.213494 | orchestrator | "mon": { 2025-01-16 15:23:59.213651 | orchestrator | "ceph version 17.2.7 (b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable)": 3 2025-01-16 15:23:59.213672 | orchestrator | }, 2025-01-16 15:23:59.213689 | orchestrator | "mgr": { 2025-01-16 15:23:59.213706 | orchestrator | "ceph version 17.2.7 (b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable)": 3 2025-01-16 15:23:59.213722 | orchestrator | }, 2025-01-16 15:23:59.213738 | orchestrator | "osd": { 2025-01-16 15:23:59.213755 | orchestrator | "ceph version 17.2.7 (b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable)": 6 2025-01-16 15:23:59.213772 | orchestrator | }, 2025-01-16 15:23:59.213788 | orchestrator | "mds": { 2025-01-16 15:23:59.213806 | orchestrator | "ceph version 17.2.7 (b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable)": 3 2025-01-16 15:23:59.213822 | orchestrator | }, 2025-01-16 15:23:59.213838 | orchestrator | "rgw": { 2025-01-16 15:23:59.213855 | orchestrator | "ceph version 17.2.7 (b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable)": 3 2025-01-16 15:23:59.213893 | orchestrator | }, 2025-01-16 15:23:59.213922 | orchestrator | "overall": { 2025-01-16 15:23:59.213939 | orchestrator | "ceph version 17.2.7 (b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable)": 18 2025-01-16 15:23:59.213955 | orchestrator | } 2025-01-16 15:23:59.213972 | orchestrator | } 2025-01-16 15:23:59.214006 | orchestrator | 2025-01-16 15:23:59.541810 | orchestrator | # Ceph OSD tree 2025-01-16 15:23:59.541953 | orchestrator | 2025-01-16 15:23:59.541980 | orchestrator | + echo 2025-01-16 15:23:59.542002 | orchestrator | + echo '# Ceph OSD tree' 2025-01-16 15:23:59.542100 | orchestrator | + echo 2025-01-16 15:23:59.542126 | orchestrator | + ceph osd df tree 2025-01-16 15:23:59.542174 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2025-01-16 15:23:59.556554 | orchestrator | -1 0.11691 - 120 GiB 8.4 GiB 6.7 GiB 0 B 1.7 GiB 111 GiB 7.02 
1.00 - root default 2025-01-16 15:23:59.556665 | orchestrator | -5 0.03897 - 40 GiB 2.8 GiB 2.2 GiB 0 B 596 MiB 37 GiB 7.02 1.00 - host testbed-node-3 2025-01-16 15:23:59.556681 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.2 GiB 0 B 298 MiB 18 GiB 7.69 1.09 197 up osd.0 2025-01-16 15:23:59.556697 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1003 MiB 0 B 298 MiB 19 GiB 6.36 0.91 195 up osd.5 2025-01-16 15:23:59.556711 | orchestrator | -3 0.03897 - 40 GiB 2.8 GiB 2.2 GiB 0 B 596 MiB 37 GiB 7.02 1.00 - host testbed-node-4 2025-01-16 15:23:59.556726 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.7 GiB 1.4 GiB 0 B 298 MiB 18 GiB 8.59 1.22 204 up osd.1 2025-01-16 15:23:59.556740 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.1 GiB 819 MiB 0 B 298 MiB 19 GiB 5.46 0.78 186 up osd.4 2025-01-16 15:23:59.556754 | orchestrator | -7 0.03897 - 40 GiB 2.8 GiB 2.2 GiB 0 B 596 MiB 37 GiB 7.02 1.00 - host testbed-node-5 2025-01-16 15:23:59.556768 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.7 GiB 1.4 GiB 0 B 298 MiB 18 GiB 8.43 1.20 190 up osd.2 2025-01-16 15:23:59.556782 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.1 GiB 851 MiB 0 B 298 MiB 19 GiB 5.61 0.80 198 up osd.3 2025-01-16 15:23:59.556796 | orchestrator | TOTAL 120 GiB 8.4 GiB 6.7 GiB 0 B 1.7 GiB 111 GiB 7.02 2025-01-16 15:23:59.556810 | orchestrator | MIN/MAX VAR: 0.78/1.22 STDDEV: 1.27 2025-01-16 15:23:59.556841 | orchestrator | 2025-01-16 15:23:59.906589 | orchestrator | # Ceph monitor status 2025-01-16 15:23:59.906687 | orchestrator | 2025-01-16 15:23:59.906698 | orchestrator | + echo 2025-01-16 15:23:59.906707 | orchestrator | + echo '# Ceph monitor status' 2025-01-16 15:23:59.906716 | orchestrator | + echo 2025-01-16 15:23:59.906724 | orchestrator | + ceph mon stat 2025-01-16 15:23:59.906754 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {1}, election epoch 6, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2025-01-16 15:23:59.920332 | orchestrator | 2025-01-16 15:24:00.293710 | orchestrator | # Ceph quorum status 2025-01-16 15:24:00.293799 | orchestrator | 2025-01-16 15:24:00.293817 | orchestrator | + echo 2025-01-16 15:24:00.293826 | orchestrator | + echo '# Ceph quorum status' 2025-01-16 15:24:00.293834 | orchestrator | + echo 2025-01-16 15:24:00.293842 | orchestrator | + ceph quorum_status 2025-01-16 15:24:00.293850 | orchestrator | + jq 2025-01-16 15:24:00.293872 | orchestrator | { 2025-01-16 15:24:00.661822 | orchestrator | "election_epoch": 6, 2025-01-16 15:24:00.661953 | orchestrator | "quorum": [ 2025-01-16 15:24:00.662756 | orchestrator | 0, 2025-01-16 15:24:00.662791 | orchestrator | 1, 2025-01-16 15:24:00.662803 | orchestrator | 2 2025-01-16 15:24:00.662813 | orchestrator | ], 2025-01-16 15:24:00.662824 | orchestrator | "quorum_names": [ 2025-01-16 15:24:00.662835 | orchestrator | "testbed-node-0", 2025-01-16 15:24:00.662856 | orchestrator | "testbed-node-1", 2025-01-16 15:24:00.662866 | orchestrator | "testbed-node-2" 2025-01-16 15:24:00.662876 | orchestrator | ], 2025-01-16 15:24:00.662887 | orchestrator | "quorum_leader_name": "testbed-node-0", 2025-01-16 15:24:00.662899 | orchestrator | "quorum_age": 1250, 2025-01-16 15:24:00.662909 | orchestrator | "features": { 2025-01-16 15:24:00.662919 | orchestrator | "quorum_con": "4540138320759226367", 2025-01-16 15:24:00.662930 | 
orchestrator | "quorum_mon": [ 2025-01-16 15:24:00.662940 | orchestrator | "kraken", 2025-01-16 15:24:00.662951 | orchestrator | "luminous", 2025-01-16 15:24:00.662961 | orchestrator | "mimic", 2025-01-16 15:24:00.662971 | orchestrator | "osdmap-prune", 2025-01-16 15:24:00.662985 | orchestrator | "nautilus", 2025-01-16 15:24:00.663003 | orchestrator | "octopus", 2025-01-16 15:24:00.663021 | orchestrator | "pacific", 2025-01-16 15:24:00.663037 | orchestrator | "elector-pinging", 2025-01-16 15:24:00.663053 | orchestrator | "quincy" 2025-01-16 15:24:00.663068 | orchestrator | ] 2025-01-16 15:24:00.663083 | orchestrator | }, 2025-01-16 15:24:00.663099 | orchestrator | "monmap": { 2025-01-16 15:24:00.663114 | orchestrator | "epoch": 1, 2025-01-16 15:24:00.663129 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2025-01-16 15:24:00.663145 | orchestrator | "modified": "2025-01-16T15:02:34.190231Z", 2025-01-16 15:24:00.663160 | orchestrator | "created": "2025-01-16T15:02:34.190231Z", 2025-01-16 15:24:00.663175 | orchestrator | "min_mon_release": 17, 2025-01-16 15:24:00.663191 | orchestrator | "min_mon_release_name": "quincy", 2025-01-16 15:24:00.663206 | orchestrator | "election_strategy": 1, 2025-01-16 15:24:00.663222 | orchestrator | "disallowed_leaders: ": "", 2025-01-16 15:24:00.663238 | orchestrator | "stretch_mode": false, 2025-01-16 15:24:00.663254 | orchestrator | "tiebreaker_mon": "", 2025-01-16 15:24:00.663270 | orchestrator | "removed_ranks: ": "1", 2025-01-16 15:24:00.663286 | orchestrator | "features": { 2025-01-16 15:24:00.663302 | orchestrator | "persistent": [ 2025-01-16 15:24:00.663319 | orchestrator | "kraken", 2025-01-16 15:24:00.663335 | orchestrator | "luminous", 2025-01-16 15:24:00.663352 | orchestrator | "mimic", 2025-01-16 15:24:00.663370 | orchestrator | "osdmap-prune", 2025-01-16 15:24:00.663386 | orchestrator | "nautilus", 2025-01-16 15:24:00.663404 | orchestrator | "octopus", 2025-01-16 15:24:00.663415 | orchestrator | "pacific", 2025-01-16 15:24:00.663425 | orchestrator | "elector-pinging", 2025-01-16 15:24:00.663435 | orchestrator | "quincy" 2025-01-16 15:24:00.663446 | orchestrator | ], 2025-01-16 15:24:00.663456 | orchestrator | "optional": [] 2025-01-16 15:24:00.663466 | orchestrator | }, 2025-01-16 15:24:00.663477 | orchestrator | "mons": [ 2025-01-16 15:24:00.663490 | orchestrator | { 2025-01-16 15:24:00.663506 | orchestrator | "rank": 0, 2025-01-16 15:24:00.663554 | orchestrator | "name": "testbed-node-0", 2025-01-16 15:24:00.663574 | orchestrator | "public_addrs": { 2025-01-16 15:24:00.663590 | orchestrator | "addrvec": [ 2025-01-16 15:24:00.663605 | orchestrator | { 2025-01-16 15:24:00.663619 | orchestrator | "type": "v2", 2025-01-16 15:24:00.663634 | orchestrator | "addr": "192.168.16.10:3300", 2025-01-16 15:24:00.663649 | orchestrator | "nonce": 0 2025-01-16 15:24:00.663664 | orchestrator | }, 2025-01-16 15:24:00.663681 | orchestrator | { 2025-01-16 15:24:00.663698 | orchestrator | "type": "v1", 2025-01-16 15:24:00.663715 | orchestrator | "addr": "192.168.16.10:6789", 2025-01-16 15:24:00.663730 | orchestrator | "nonce": 0 2025-01-16 15:24:00.663754 | orchestrator | } 2025-01-16 15:24:00.663771 | orchestrator | ] 2025-01-16 15:24:00.663787 | orchestrator | }, 2025-01-16 15:24:00.663831 | orchestrator | "addr": "192.168.16.10:6789/0", 2025-01-16 15:24:00.663849 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2025-01-16 15:24:00.663867 | orchestrator | "priority": 0, 2025-01-16 15:24:00.663885 | orchestrator | "weight": 0, 2025-01-16 
15:24:00.663902 | orchestrator | "crush_location": "{}" 2025-01-16 15:24:00.663919 | orchestrator | }, 2025-01-16 15:24:00.663935 | orchestrator | { 2025-01-16 15:24:00.663953 | orchestrator | "rank": 1, 2025-01-16 15:24:00.663970 | orchestrator | "name": "testbed-node-1", 2025-01-16 15:24:00.663987 | orchestrator | "public_addrs": { 2025-01-16 15:24:00.664004 | orchestrator | "addrvec": [ 2025-01-16 15:24:00.664021 | orchestrator | { 2025-01-16 15:24:00.664038 | orchestrator | "type": "v2", 2025-01-16 15:24:00.664055 | orchestrator | "addr": "192.168.16.11:3300", 2025-01-16 15:24:00.664071 | orchestrator | "nonce": 0 2025-01-16 15:24:00.664087 | orchestrator | }, 2025-01-16 15:24:00.664104 | orchestrator | { 2025-01-16 15:24:00.664123 | orchestrator | "type": "v1", 2025-01-16 15:24:00.664142 | orchestrator | "addr": "192.168.16.11:6789", 2025-01-16 15:24:00.664159 | orchestrator | "nonce": 0 2025-01-16 15:24:00.664177 | orchestrator | } 2025-01-16 15:24:00.664196 | orchestrator | ] 2025-01-16 15:24:00.664215 | orchestrator | }, 2025-01-16 15:24:00.664233 | orchestrator | "addr": "192.168.16.11:6789/0", 2025-01-16 15:24:00.664250 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2025-01-16 15:24:00.664268 | orchestrator | "priority": 0, 2025-01-16 15:24:00.664284 | orchestrator | "weight": 0, 2025-01-16 15:24:00.664301 | orchestrator | "crush_location": "{}" 2025-01-16 15:24:00.664317 | orchestrator | }, 2025-01-16 15:24:00.664334 | orchestrator | { 2025-01-16 15:24:00.664352 | orchestrator | "rank": 2, 2025-01-16 15:24:00.664370 | orchestrator | "name": "testbed-node-2", 2025-01-16 15:24:00.664388 | orchestrator | "public_addrs": { 2025-01-16 15:24:00.664405 | orchestrator | "addrvec": [ 2025-01-16 15:24:00.664423 | orchestrator | { 2025-01-16 15:24:00.664439 | orchestrator | "type": "v2", 2025-01-16 15:24:00.664456 | orchestrator | "addr": "192.168.16.12:3300", 2025-01-16 15:24:00.664476 | orchestrator | "nonce": 0 2025-01-16 15:24:00.664495 | orchestrator | }, 2025-01-16 15:24:00.664514 | orchestrator | { 2025-01-16 15:24:00.664601 | orchestrator | "type": "v1", 2025-01-16 15:24:00.664618 | orchestrator | "addr": "192.168.16.12:6789", 2025-01-16 15:24:00.664635 | orchestrator | "nonce": 0 2025-01-16 15:24:00.664652 | orchestrator | } 2025-01-16 15:24:00.664669 | orchestrator | ] 2025-01-16 15:24:00.664685 | orchestrator | }, 2025-01-16 15:24:00.664701 | orchestrator | "addr": "192.168.16.12:6789/0", 2025-01-16 15:24:00.664718 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2025-01-16 15:24:00.664735 | orchestrator | "priority": 0, 2025-01-16 15:24:00.664754 | orchestrator | "weight": 0, 2025-01-16 15:24:00.664772 | orchestrator | "crush_location": "{}" 2025-01-16 15:24:00.664792 | orchestrator | } 2025-01-16 15:24:00.664811 | orchestrator | ] 2025-01-16 15:24:00.664831 | orchestrator | } 2025-01-16 15:24:00.664850 | orchestrator | } 2025-01-16 15:24:00.664869 | orchestrator | 2025-01-16 15:24:00.664888 | orchestrator | # Ceph free space status 2025-01-16 15:24:00.664908 | orchestrator | 2025-01-16 15:24:00.664928 | orchestrator | + echo 2025-01-16 15:24:00.664948 | orchestrator | + echo '# Ceph free space status' 2025-01-16 15:24:00.664967 | orchestrator | + echo 2025-01-16 15:24:00.664988 | orchestrator | + ceph df 2025-01-16 15:24:00.665031 | orchestrator | --- RAW STORAGE --- 2025-01-16 15:24:00.676115 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2025-01-16 15:24:00.676221 | orchestrator | hdd 120 GiB 111 GiB 8.4 GiB 8.4 GiB 7.02 2025-01-16 15:24:00.676236 
| orchestrator | TOTAL 120 GiB 111 GiB 8.4 GiB 8.4 GiB 7.02 2025-01-16 15:24:00.676248 | orchestrator | 2025-01-16 15:24:00.676261 | orchestrator | --- POOLS --- 2025-01-16 15:24:00.676272 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2025-01-16 15:24:00.676284 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 52 GiB 2025-01-16 15:24:00.676296 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2025-01-16 15:24:00.676307 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2025-01-16 15:24:00.676319 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2025-01-16 15:24:00.676354 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2025-01-16 15:24:00.676366 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2025-01-16 15:24:00.676377 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB 2025-01-16 15:24:00.676388 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2025-01-16 15:24:00.676400 | orchestrator | .rgw.root 9 32 3.7 KiB 8 64 KiB 0 52 GiB 2025-01-16 15:24:00.676411 | orchestrator | backups 10 32 19 B 1 12 KiB 0 35 GiB 2025-01-16 15:24:00.676422 | orchestrator | volumes 11 32 19 B 1 12 KiB 0 35 GiB 2025-01-16 15:24:00.676433 | orchestrator | images 12 32 2.2 GiB 298 6.7 GiB 6.04 35 GiB 2025-01-16 15:24:00.676458 | orchestrator | metrics 13 32 19 B 1 12 KiB 0 35 GiB 2025-01-16 15:24:00.676470 | orchestrator | vms 14 32 19 B 1 12 KiB 0 35 GiB 2025-01-16 15:24:00.676496 | orchestrator | ++ semver latest 5.0.0 2025-01-16 15:24:00.693171 | orchestrator | + [[ -1 -eq -1 ]] 2025-01-16 15:24:01.672749 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-01-16 15:24:01.672868 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2025-01-16 15:24:01.672888 | orchestrator | + osism apply facts 2025-01-16 15:24:01.672923 | orchestrator | 2025-01-16 15:24:01 | INFO  | Task 033c6d8b-34b8-4456-85b3-828a25b58719 (facts) was prepared for execution. 2025-01-16 15:24:05.677429 | orchestrator | 2025-01-16 15:24:01 | INFO  | It takes a moment until task 033c6d8b-34b8-4456-85b3-828a25b58719 (facts) has been started and output is visible here. 
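The check-services run above drives the Ceph smoke tests by hand (ceph -s, ceph versions, ceph osd df tree, ceph mon stat, ceph quorum_status | jq, ceph df) before handing control to osism apply facts and the validators. Below is a minimal standalone sketch of the same probes, assuming only that ceph and jq are on the PATH, that this testbed expects three mons in quorum, and that anything other than HEALTH_OK should abort; it mirrors the set -e pattern traced from check-services.sh, whose full source is not reproduced in this log.

#!/usr/bin/env bash
# Hypothetical re-run of the Ceph smoke checks shown above; not the actual
# check-services.sh, just the same probes with fail-fast behaviour.
set -e

# Overall cluster state, daemon versions, and capacity, as printed in the job log.
ceph -s
ceph versions
ceph osd df tree
ceph df

# Abort if any monitor is missing from the quorum (three expected in this testbed).
quorum_size=$(ceph quorum_status | jq '.quorum_names | length')
if [ "${quorum_size}" -ne 3 ]; then
    echo "expected 3 mons in quorum, found ${quorum_size}" >&2
    exit 1
fi

# Abort unless the cluster reports HEALTH_OK, matching the health gate above.
[ "$(ceph health)" = "HEALTH_OK" ]

Keeping each probe as its own command preserves the per-section output seen in this log while still failing on the first broken check.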
2025-01-16 15:24:05.677601 | orchestrator | 2025-01-16 15:24:05.677973 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-01-16 15:24:05.678100 | orchestrator | 2025-01-16 15:24:05.678120 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-01-16 15:24:05.678340 | orchestrator | Thursday 16 January 2025 15:24:05 +0000 (0:00:01.756) 0:00:01.756 ****** 2025-01-16 15:24:06.192397 | orchestrator | ok: [testbed-manager] 2025-01-16 15:24:08.139233 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:24:08.249365 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:24:08.249455 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:24:08.249465 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:24:08.249472 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:24:08.249479 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:24:08.249485 | orchestrator | 2025-01-16 15:24:08.249493 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-01-16 15:24:08.249500 | orchestrator | Thursday 16 January 2025 15:24:08 +0000 (0:00:02.460) 0:00:04.217 ****** 2025-01-16 15:24:08.249519 | orchestrator | skipping: [testbed-manager] 2025-01-16 15:24:08.306109 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:24:08.363310 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:24:08.429310 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:24:08.484592 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:24:09.698476 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:24:09.698663 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:24:09.698681 | orchestrator | 2025-01-16 15:24:09.699203 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-01-16 15:24:09.699543 | orchestrator | 2025-01-16 15:24:09.700062 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-01-16 15:24:09.700182 | orchestrator | Thursday 16 January 2025 15:24:09 +0000 (0:00:01.565) 0:00:05.782 ****** 2025-01-16 15:24:13.699364 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:24:13.699721 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:24:13.699749 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:24:13.699759 | orchestrator | ok: [testbed-manager] 2025-01-16 15:24:13.699768 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:24:13.699847 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:24:13.699861 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:24:13.699971 | orchestrator | 2025-01-16 15:24:13.700164 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-01-16 15:24:13.700189 | orchestrator | 2025-01-16 15:24:13.700344 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-01-16 15:24:13.701850 | orchestrator | Thursday 16 January 2025 15:24:13 +0000 (0:00:03.999) 0:00:09.782 ****** 2025-01-16 15:24:13.817737 | orchestrator | skipping: [testbed-manager] 2025-01-16 15:24:13.876304 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:24:13.952488 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:24:14.043684 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:24:14.118675 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:24:15.421971 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:24:15.422280 | orchestrator | skipping: 
[testbed-node-5] 2025-01-16 15:24:15.422311 | orchestrator | 2025-01-16 15:24:15.422340 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 15:24:15.422607 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 15:24:15.422722 | orchestrator | 2025-01-16 15:24:15 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-01-16 15:24:15.422744 | orchestrator | 2025-01-16 15:24:15 | INFO  | Please wait and do not abort execution. 2025-01-16 15:24:15.422766 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 15:24:15.422887 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 15:24:15.422912 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 15:24:15.423119 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 15:24:15.423242 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 15:24:15.423475 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 15:24:15.423775 | orchestrator | 2025-01-16 15:24:15.423809 | orchestrator | 2025-01-16 15:24:15.423993 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 15:24:15.424166 | orchestrator | Thursday 16 January 2025 15:24:15 +0000 (0:00:01.723) 0:00:11.505 ****** 2025-01-16 15:24:15.424339 | orchestrator | =============================================================================== 2025-01-16 15:24:15.425044 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.00s 2025-01-16 15:24:15.425168 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 2.46s 2025-01-16 15:24:15.425304 | orchestrator | Gather facts for all hosts ---------------------------------------------- 1.72s 2025-01-16 15:24:15.425503 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.57s 2025-01-16 15:24:15.720494 | orchestrator | + osism validate ceph-mons 2025-01-16 15:24:56.198905 | orchestrator | 2025-01-16 15:24:56.199013 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2025-01-16 15:24:56.199029 | orchestrator | 2025-01-16 15:24:56.199040 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-01-16 15:24:56.199049 | orchestrator | Thursday 16 January 2025 15:24:19 +0000 (0:00:00.982) 0:00:00.982 ****** 2025-01-16 15:24:56.199059 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-01-16 15:24:56.199069 | orchestrator | 2025-01-16 15:24:56.199078 | orchestrator | TASK [Create report output directory] ****************************************** 2025-01-16 15:24:56.199088 | orchestrator | Thursday 16 January 2025 15:24:22 +0000 (0:00:02.544) 0:00:03.527 ****** 2025-01-16 15:24:56.199118 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-01-16 15:24:56.199128 | orchestrator | 2025-01-16 15:24:56.199138 | orchestrator | TASK [Define report vars] ****************************************************** 2025-01-16 15:24:56.199147 | orchestrator | 
Thursday 16 January 2025 15:24:23 +0000 (0:00:01.178) 0:00:04.706 ****** 2025-01-16 15:24:56.199157 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:24:56.199168 | orchestrator | 2025-01-16 15:24:56.199177 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-01-16 15:24:56.199186 | orchestrator | Thursday 16 January 2025 15:24:24 +0000 (0:00:00.743) 0:00:05.449 ****** 2025-01-16 15:24:56.199195 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:24:56.199204 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:24:56.199214 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:24:56.199224 | orchestrator | 2025-01-16 15:24:56.199234 | orchestrator | TASK [Get container info] ****************************************************** 2025-01-16 15:24:56.199257 | orchestrator | Thursday 16 January 2025 15:24:25 +0000 (0:00:01.114) 0:00:06.564 ****** 2025-01-16 15:24:56.199266 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:24:56.199276 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:24:56.199285 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:24:56.199294 | orchestrator | 2025-01-16 15:24:56.199303 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-01-16 15:24:56.199313 | orchestrator | Thursday 16 January 2025 15:24:26 +0000 (0:00:01.304) 0:00:07.868 ****** 2025-01-16 15:24:56.199322 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:24:56.199332 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:24:56.199343 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:24:56.199353 | orchestrator | 2025-01-16 15:24:56.199363 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-01-16 15:24:56.199374 | orchestrator | Thursday 16 January 2025 15:24:27 +0000 (0:00:00.868) 0:00:08.736 ****** 2025-01-16 15:24:56.199384 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:24:56.199395 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:24:56.199405 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:24:56.199415 | orchestrator | 2025-01-16 15:24:56.199424 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-01-16 15:24:56.199433 | orchestrator | Thursday 16 January 2025 15:24:28 +0000 (0:00:01.001) 0:00:09.738 ****** 2025-01-16 15:24:56.199443 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:24:56.199453 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:24:56.199464 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:24:56.199474 | orchestrator | 2025-01-16 15:24:56.199485 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2025-01-16 15:24:56.199496 | orchestrator | Thursday 16 January 2025 15:24:29 +0000 (0:00:00.884) 0:00:10.622 ****** 2025-01-16 15:24:56.199507 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:24:56.199520 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:24:56.199552 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:24:56.199563 | orchestrator | 2025-01-16 15:24:56.199574 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2025-01-16 15:24:56.199585 | orchestrator | Thursday 16 January 2025 15:24:30 +0000 (0:00:00.870) 0:00:11.493 ****** 2025-01-16 15:24:56.199598 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:24:56.199610 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:24:56.199621 | orchestrator | ok: [testbed-node-2] 2025-01-16 
15:24:56.199631 | orchestrator | 2025-01-16 15:24:56.199644 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-01-16 15:24:56.199656 | orchestrator | Thursday 16 January 2025 15:24:31 +0000 (0:00:00.875) 0:00:12.368 ****** 2025-01-16 15:24:56.199667 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:24:56.199678 | orchestrator | 2025-01-16 15:24:56.199689 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-01-16 15:24:56.199700 | orchestrator | Thursday 16 January 2025 15:24:32 +0000 (0:00:00.935) 0:00:13.304 ****** 2025-01-16 15:24:56.199722 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:24:56.199733 | orchestrator | 2025-01-16 15:24:56.199744 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-01-16 15:24:56.199756 | orchestrator | Thursday 16 January 2025 15:24:32 +0000 (0:00:00.820) 0:00:14.125 ****** 2025-01-16 15:24:56.199767 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:24:56.199778 | orchestrator | 2025-01-16 15:24:56.199788 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-01-16 15:24:56.199799 | orchestrator | Thursday 16 January 2025 15:24:33 +0000 (0:00:00.834) 0:00:14.960 ****** 2025-01-16 15:24:56.199810 | orchestrator | 2025-01-16 15:24:56.199821 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-01-16 15:24:56.199831 | orchestrator | Thursday 16 January 2025 15:24:34 +0000 (0:00:00.289) 0:00:15.249 ****** 2025-01-16 15:24:56.199840 | orchestrator | 2025-01-16 15:24:56.199850 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-01-16 15:24:56.199859 | orchestrator | Thursday 16 January 2025 15:24:34 +0000 (0:00:00.285) 0:00:15.535 ****** 2025-01-16 15:24:56.199868 | orchestrator | 2025-01-16 15:24:56.199878 | orchestrator | TASK [Print report file information] ******************************************* 2025-01-16 15:24:56.199887 | orchestrator | Thursday 16 January 2025 15:24:34 +0000 (0:00:00.518) 0:00:16.054 ****** 2025-01-16 15:24:56.199905 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:24:56.199916 | orchestrator | 2025-01-16 15:24:56.199926 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-01-16 15:24:56.199935 | orchestrator | Thursday 16 January 2025 15:24:35 +0000 (0:00:00.814) 0:00:16.869 ****** 2025-01-16 15:24:56.199944 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:24:56.199954 | orchestrator | 2025-01-16 15:24:56.199979 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2025-01-16 15:24:59.179613 | orchestrator | Thursday 16 January 2025 15:24:36 +0000 (0:00:00.820) 0:00:17.689 ****** 2025-01-16 15:24:59.179735 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:24:59.179757 | orchestrator | 2025-01-16 15:24:59.179783 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2025-01-16 15:24:59.179806 | orchestrator | Thursday 16 January 2025 15:24:37 +0000 (0:00:00.752) 0:00:18.441 ****** 2025-01-16 15:24:59.179832 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:24:59.179856 | orchestrator | 2025-01-16 15:24:59.179878 | orchestrator | TASK [Set quorum test data] **************************************************** 2025-01-16 15:24:59.179925 | orchestrator | 
Thursday 16 January 2025 15:24:39 +0000 (0:00:01.834) 0:00:20.276 ****** 2025-01-16 15:24:59.179949 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:24:59.179974 | orchestrator | 2025-01-16 15:24:59.179998 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2025-01-16 15:24:59.180021 | orchestrator | Thursday 16 January 2025 15:24:39 +0000 (0:00:00.802) 0:00:21.079 ****** 2025-01-16 15:24:59.180036 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:24:59.180050 | orchestrator | 2025-01-16 15:24:59.180064 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2025-01-16 15:24:59.180078 | orchestrator | Thursday 16 January 2025 15:24:40 +0000 (0:00:00.732) 0:00:21.811 ****** 2025-01-16 15:24:59.180093 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:24:59.180107 | orchestrator | 2025-01-16 15:24:59.180123 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2025-01-16 15:24:59.180139 | orchestrator | Thursday 16 January 2025 15:24:41 +0000 (0:00:00.802) 0:00:22.613 ****** 2025-01-16 15:24:59.180154 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:24:59.180170 | orchestrator | 2025-01-16 15:24:59.180185 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2025-01-16 15:24:59.180201 | orchestrator | Thursday 16 January 2025 15:24:42 +0000 (0:00:00.786) 0:00:23.400 ****** 2025-01-16 15:24:59.180216 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:24:59.180232 | orchestrator | 2025-01-16 15:24:59.180248 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2025-01-16 15:24:59.180289 | orchestrator | Thursday 16 January 2025 15:24:42 +0000 (0:00:00.734) 0:00:24.134 ****** 2025-01-16 15:24:59.180305 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:24:59.180321 | orchestrator | 2025-01-16 15:24:59.180336 | orchestrator | TASK [Prepare status test vars] ************************************************ 2025-01-16 15:24:59.180352 | orchestrator | Thursday 16 January 2025 15:24:43 +0000 (0:00:00.736) 0:00:24.871 ****** 2025-01-16 15:24:59.180367 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:24:59.180382 | orchestrator | 2025-01-16 15:24:59.180398 | orchestrator | TASK [Gather status data] ****************************************************** 2025-01-16 15:24:59.180414 | orchestrator | Thursday 16 January 2025 15:24:44 +0000 (0:00:00.731) 0:00:25.602 ****** 2025-01-16 15:24:59.180429 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:24:59.180444 | orchestrator | 2025-01-16 15:24:59.180459 | orchestrator | TASK [Set health test data] **************************************************** 2025-01-16 15:24:59.180474 | orchestrator | Thursday 16 January 2025 15:24:46 +0000 (0:00:01.629) 0:00:27.232 ****** 2025-01-16 15:24:59.180487 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:24:59.180501 | orchestrator | 2025-01-16 15:24:59.180515 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2025-01-16 15:24:59.180555 | orchestrator | Thursday 16 January 2025 15:24:46 +0000 (0:00:00.811) 0:00:28.043 ****** 2025-01-16 15:24:59.180570 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:24:59.180584 | orchestrator | 2025-01-16 15:24:59.180597 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2025-01-16 15:24:59.180612 | orchestrator | Thursday 16 January 
2025 15:24:47 +0000 (0:00:00.767) 0:00:28.811 ****** 2025-01-16 15:24:59.180626 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:24:59.180643 | orchestrator | 2025-01-16 15:24:59.180658 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2025-01-16 15:24:59.180671 | orchestrator | Thursday 16 January 2025 15:24:48 +0000 (0:00:00.762) 0:00:29.574 ****** 2025-01-16 15:24:59.180687 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:24:59.180701 | orchestrator | 2025-01-16 15:24:59.180714 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2025-01-16 15:24:59.180728 | orchestrator | Thursday 16 January 2025 15:24:49 +0000 (0:00:00.759) 0:00:30.333 ****** 2025-01-16 15:24:59.180742 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:24:59.180756 | orchestrator | 2025-01-16 15:24:59.180770 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-01-16 15:24:59.180784 | orchestrator | Thursday 16 January 2025 15:24:49 +0000 (0:00:00.753) 0:00:31.086 ****** 2025-01-16 15:24:59.180798 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-01-16 15:24:59.180812 | orchestrator | 2025-01-16 15:24:59.180826 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-01-16 15:24:59.180840 | orchestrator | Thursday 16 January 2025 15:24:50 +0000 (0:00:00.822) 0:00:31.909 ****** 2025-01-16 15:24:59.180854 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:24:59.180868 | orchestrator | 2025-01-16 15:24:59.180882 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-01-16 15:24:59.180896 | orchestrator | Thursday 16 January 2025 15:24:51 +0000 (0:00:00.816) 0:00:32.726 ****** 2025-01-16 15:24:59.180910 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-01-16 15:24:59.180924 | orchestrator | 2025-01-16 15:24:59.180938 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-01-16 15:24:59.180958 | orchestrator | Thursday 16 January 2025 15:24:53 +0000 (0:00:01.862) 0:00:34.589 ****** 2025-01-16 15:24:59.180973 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-01-16 15:24:59.180987 | orchestrator | 2025-01-16 15:24:59.181001 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-01-16 15:24:59.181016 | orchestrator | Thursday 16 January 2025 15:24:54 +0000 (0:00:00.809) 0:00:35.398 ****** 2025-01-16 15:24:59.181030 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-01-16 15:24:59.181052 | orchestrator | 2025-01-16 15:24:59.181085 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-01-16 15:24:59.291163 | orchestrator | Thursday 16 January 2025 15:24:55 +0000 (0:00:00.814) 0:00:36.213 ****** 2025-01-16 15:24:59.291282 | orchestrator | 2025-01-16 15:24:59.291304 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-01-16 15:24:59.291320 | orchestrator | Thursday 16 January 2025 15:24:55 +0000 (0:00:00.285) 0:00:36.499 ****** 2025-01-16 15:24:59.291334 | orchestrator | 2025-01-16 15:24:59.291348 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-01-16 15:24:59.291362 | orchestrator | Thursday 16 January 2025 15:24:55 +0000 
(0:00:00.281) 0:00:36.781 ****** 2025-01-16 15:24:59.291377 | orchestrator | 2025-01-16 15:24:59.291390 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-01-16 15:24:59.291404 | orchestrator | Thursday 16 January 2025 15:24:56 +0000 (0:00:00.547) 0:00:37.328 ****** 2025-01-16 15:24:59.291419 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-01-16 15:24:59.291433 | orchestrator | 2025-01-16 15:24:59.291447 | orchestrator | TASK [Print report file information] ******************************************* 2025-01-16 15:24:59.291460 | orchestrator | Thursday 16 January 2025 15:24:57 +0000 (0:00:01.720) 0:00:39.049 ****** 2025-01-16 15:24:59.291474 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-01-16 15:24:59.291488 | orchestrator |  "msg": [ 2025-01-16 15:24:59.291503 | orchestrator |  "Validator run completed.", 2025-01-16 15:24:59.291598 | orchestrator |  "You can find the report file here:", 2025-01-16 15:24:59.291625 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-01-16T15:24:20+00:00-report.json", 2025-01-16 15:24:59.291648 | orchestrator |  "on the following host:", 2025-01-16 15:24:59.291701 | orchestrator |  "testbed-manager" 2025-01-16 15:24:59.291729 | orchestrator |  ] 2025-01-16 15:24:59.291756 | orchestrator | } 2025-01-16 15:24:59.291780 | orchestrator | 2025-01-16 15:24:59.291798 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 15:24:59.291815 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-01-16 15:24:59.291830 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 15:24:59.291845 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 15:24:59.291865 | orchestrator | 2025-01-16 15:24:59.291880 | orchestrator | 2025-01-16 15:24:59.291894 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 15:24:59.291908 | orchestrator | Thursday 16 January 2025 15:24:59 +0000 (0:00:01.098) 0:00:40.148 ****** 2025-01-16 15:24:59.291922 | orchestrator | =============================================================================== 2025-01-16 15:24:59.291936 | orchestrator | Get timestamp for report file ------------------------------------------- 2.54s 2025-01-16 15:24:59.291950 | orchestrator | Aggregate test results step one ----------------------------------------- 1.86s 2025-01-16 15:24:59.291964 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.83s 2025-01-16 15:24:59.291978 | orchestrator | Write report file ------------------------------------------------------- 1.72s 2025-01-16 15:24:59.291992 | orchestrator | Gather status data ------------------------------------------------------ 1.63s 2025-01-16 15:24:59.292006 | orchestrator | Get container info ------------------------------------------------------ 1.30s 2025-01-16 15:24:59.292020 | orchestrator | Create report output directory ------------------------------------------ 1.18s 2025-01-16 15:24:59.292034 | orchestrator | Flush handlers ---------------------------------------------------------- 1.12s 2025-01-16 15:24:59.292048 | orchestrator | Prepare test data for container existance test -------------------------- 1.11s 2025-01-16 15:24:59.292085 | orchestrator 
| Print report file information ------------------------------------------- 1.10s 2025-01-16 15:24:59.292099 | orchestrator | Flush handlers ---------------------------------------------------------- 1.09s 2025-01-16 15:24:59.292114 | orchestrator | Set test result to passed if container is existing ---------------------- 1.00s 2025-01-16 15:24:59.292128 | orchestrator | Aggregate test results step one ----------------------------------------- 0.94s 2025-01-16 15:24:59.292142 | orchestrator | Prepare test data ------------------------------------------------------- 0.88s 2025-01-16 15:24:59.292156 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.88s 2025-01-16 15:24:59.292176 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.87s 2025-01-16 15:24:59.292191 | orchestrator | Set test result to failed if container is missing ----------------------- 0.87s 2025-01-16 15:24:59.292205 | orchestrator | Aggregate test results step three --------------------------------------- 0.83s 2025-01-16 15:24:59.292219 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.82s 2025-01-16 15:24:59.292234 | orchestrator | Aggregate test results step two ----------------------------------------- 0.82s 2025-01-16 15:24:59.292276 | orchestrator | + osism validate ceph-mgrs 2025-01-16 15:25:33.798483 | orchestrator | 2025-01-16 15:25:33.798682 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2025-01-16 15:25:33.798720 | orchestrator | 2025-01-16 15:25:33.798747 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-01-16 15:25:33.798773 | orchestrator | Thursday 16 January 2025 15:25:03 +0000 (0:00:01.032) 0:00:01.032 ****** 2025-01-16 15:25:33.798800 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-01-16 15:25:33.798827 | orchestrator | 2025-01-16 15:25:33.798853 | orchestrator | TASK [Create report output directory] ****************************************** 2025-01-16 15:25:33.798880 | orchestrator | Thursday 16 January 2025 15:25:05 +0000 (0:00:01.556) 0:00:02.588 ****** 2025-01-16 15:25:33.798898 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-01-16 15:25:33.798912 | orchestrator | 2025-01-16 15:25:33.798927 | orchestrator | TASK [Define report vars] ****************************************************** 2025-01-16 15:25:33.798941 | orchestrator | Thursday 16 January 2025 15:25:06 +0000 (0:00:01.240) 0:00:03.829 ****** 2025-01-16 15:25:33.798956 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:25:33.798973 | orchestrator | 2025-01-16 15:25:33.798989 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-01-16 15:25:33.799004 | orchestrator | Thursday 16 January 2025 15:25:07 +0000 (0:00:00.749) 0:00:04.579 ****** 2025-01-16 15:25:33.799020 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:25:33.799036 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:25:33.799052 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:25:33.799067 | orchestrator | 2025-01-16 15:25:33.799083 | orchestrator | TASK [Get container info] ****************************************************** 2025-01-16 15:25:33.799099 | orchestrator | Thursday 16 January 2025 15:25:08 +0000 (0:00:01.099) 0:00:05.678 ****** 2025-01-16 15:25:33.799115 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:25:33.799130 | orchestrator 
| ok: [testbed-node-2] 2025-01-16 15:25:33.799146 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:25:33.799161 | orchestrator | 2025-01-16 15:25:33.799177 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-01-16 15:25:33.799192 | orchestrator | Thursday 16 January 2025 15:25:09 +0000 (0:00:01.277) 0:00:06.956 ****** 2025-01-16 15:25:33.799208 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:25:33.799224 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:25:33.799240 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:25:33.799255 | orchestrator | 2025-01-16 15:25:33.799271 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-01-16 15:25:33.799286 | orchestrator | Thursday 16 January 2025 15:25:10 +0000 (0:00:00.849) 0:00:07.805 ****** 2025-01-16 15:25:33.799302 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:25:33.799325 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:25:33.799349 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:25:33.799408 | orchestrator | 2025-01-16 15:25:33.799433 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-01-16 15:25:33.799457 | orchestrator | Thursday 16 January 2025 15:25:11 +0000 (0:00:00.971) 0:00:08.776 ****** 2025-01-16 15:25:33.799481 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:25:33.799505 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:25:33.799562 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:25:33.799579 | orchestrator | 2025-01-16 15:25:33.799593 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ******************** 2025-01-16 15:25:33.799607 | orchestrator | Thursday 16 January 2025 15:25:12 +0000 (0:00:00.873) 0:00:09.649 ****** 2025-01-16 15:25:33.799621 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:25:33.799636 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:25:33.799650 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:25:33.799664 | orchestrator | 2025-01-16 15:25:33.799679 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2025-01-16 15:25:33.799693 | orchestrator | Thursday 16 January 2025 15:25:13 +0000 (0:00:00.854) 0:00:10.504 ****** 2025-01-16 15:25:33.799707 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:25:33.799720 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:25:33.799734 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:25:33.799748 | orchestrator | 2025-01-16 15:25:33.799762 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-01-16 15:25:33.799776 | orchestrator | Thursday 16 January 2025 15:25:13 +0000 (0:00:00.891) 0:00:11.395 ****** 2025-01-16 15:25:33.799790 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:25:33.799804 | orchestrator | 2025-01-16 15:25:33.799818 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-01-16 15:25:33.799832 | orchestrator | Thursday 16 January 2025 15:25:14 +0000 (0:00:00.953) 0:00:12.349 ****** 2025-01-16 15:25:33.799846 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:25:33.799860 | orchestrator | 2025-01-16 15:25:33.799874 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-01-16 15:25:33.799907 | orchestrator | Thursday 16 January 2025 15:25:15 +0000 (0:00:00.815) 0:00:13.165 ****** 2025-01-16 
15:25:33.799931 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:25:33.799955 | orchestrator | 2025-01-16 15:25:33.799979 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-01-16 15:25:33.800003 | orchestrator | Thursday 16 January 2025 15:25:16 +0000 (0:00:00.847) 0:00:14.012 ****** 2025-01-16 15:25:33.800024 | orchestrator | 2025-01-16 15:25:33.800045 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-01-16 15:25:33.800068 | orchestrator | Thursday 16 January 2025 15:25:16 +0000 (0:00:00.290) 0:00:14.302 ****** 2025-01-16 15:25:33.800092 | orchestrator | 2025-01-16 15:25:33.800116 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-01-16 15:25:33.800139 | orchestrator | Thursday 16 January 2025 15:25:17 +0000 (0:00:00.287) 0:00:14.590 ****** 2025-01-16 15:25:33.800159 | orchestrator | 2025-01-16 15:25:33.800173 | orchestrator | TASK [Print report file information] ******************************************* 2025-01-16 15:25:33.800187 | orchestrator | Thursday 16 January 2025 15:25:17 +0000 (0:00:00.534) 0:00:15.124 ****** 2025-01-16 15:25:33.800201 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:25:33.800215 | orchestrator | 2025-01-16 15:25:33.800229 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-01-16 15:25:33.800244 | orchestrator | Thursday 16 January 2025 15:25:18 +0000 (0:00:00.830) 0:00:15.955 ****** 2025-01-16 15:25:33.800257 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:25:33.800271 | orchestrator | 2025-01-16 15:25:33.800303 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2025-01-16 15:25:33.985145 | orchestrator | Thursday 16 January 2025 15:25:19 +0000 (0:00:00.820) 0:00:16.776 ****** 2025-01-16 15:25:33.985292 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:25:33.985316 | orchestrator | 2025-01-16 15:25:33.985332 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 2025-01-16 15:25:33.985375 | orchestrator | Thursday 16 January 2025 15:25:20 +0000 (0:00:00.731) 0:00:17.507 ****** 2025-01-16 15:25:33.985389 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:25:33.985404 | orchestrator | 2025-01-16 15:25:33.985418 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2025-01-16 15:25:33.985432 | orchestrator | Thursday 16 January 2025 15:25:21 +0000 (0:00:01.704) 0:00:19.211 ****** 2025-01-16 15:25:33.985446 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:25:33.985459 | orchestrator | 2025-01-16 15:25:33.985473 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2025-01-16 15:25:33.985487 | orchestrator | Thursday 16 January 2025 15:25:22 +0000 (0:00:00.820) 0:00:20.032 ****** 2025-01-16 15:25:33.985501 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:25:33.985550 | orchestrator | 2025-01-16 15:25:33.985569 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2025-01-16 15:25:33.985583 | orchestrator | Thursday 16 January 2025 15:25:23 +0000 (0:00:00.808) 0:00:20.840 ****** 2025-01-16 15:25:33.985597 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:25:33.985611 | orchestrator | 2025-01-16 15:25:33.985625 | orchestrator | TASK [Pass test if required mgr modules are enabled] 
*************************** 2025-01-16 15:25:33.985640 | orchestrator | Thursday 16 January 2025 15:25:24 +0000 (0:00:00.742) 0:00:21.583 ****** 2025-01-16 15:25:33.985656 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:25:33.985671 | orchestrator | 2025-01-16 15:25:33.985687 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-01-16 15:25:33.985703 | orchestrator | Thursday 16 January 2025 15:25:24 +0000 (0:00:00.768) 0:00:22.351 ****** 2025-01-16 15:25:33.985718 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-01-16 15:25:33.985733 | orchestrator | 2025-01-16 15:25:33.985749 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-01-16 15:25:33.985764 | orchestrator | Thursday 16 January 2025 15:25:25 +0000 (0:00:00.818) 0:00:23.170 ****** 2025-01-16 15:25:33.985780 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:25:33.985795 | orchestrator | 2025-01-16 15:25:33.985811 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-01-16 15:25:33.985827 | orchestrator | Thursday 16 January 2025 15:25:26 +0000 (0:00:00.824) 0:00:23.995 ****** 2025-01-16 15:25:33.985843 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-01-16 15:25:33.985858 | orchestrator | 2025-01-16 15:25:33.985873 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-01-16 15:25:33.985888 | orchestrator | Thursday 16 January 2025 15:25:28 +0000 (0:00:01.762) 0:00:25.757 ****** 2025-01-16 15:25:33.985904 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-01-16 15:25:33.985919 | orchestrator | 2025-01-16 15:25:33.985933 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-01-16 15:25:33.985962 | orchestrator | Thursday 16 January 2025 15:25:29 +0000 (0:00:00.838) 0:00:26.596 ****** 2025-01-16 15:25:33.985977 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-01-16 15:25:33.985991 | orchestrator | 2025-01-16 15:25:33.986004 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-01-16 15:25:33.986080 | orchestrator | Thursday 16 January 2025 15:25:30 +0000 (0:00:00.848) 0:00:27.444 ****** 2025-01-16 15:25:33.986095 | orchestrator | 2025-01-16 15:25:33.986109 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-01-16 15:25:33.986124 | orchestrator | Thursday 16 January 2025 15:25:30 +0000 (0:00:00.282) 0:00:27.727 ****** 2025-01-16 15:25:33.986138 | orchestrator | 2025-01-16 15:25:33.986152 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-01-16 15:25:33.986166 | orchestrator | Thursday 16 January 2025 15:25:30 +0000 (0:00:00.382) 0:00:28.109 ****** 2025-01-16 15:25:33.986180 | orchestrator | 2025-01-16 15:25:33.986194 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-01-16 15:25:33.986208 | orchestrator | Thursday 16 January 2025 15:25:31 +0000 (0:00:00.527) 0:00:28.637 ****** 2025-01-16 15:25:33.986231 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-01-16 15:25:33.986245 | orchestrator | 2025-01-16 15:25:33.986262 | orchestrator | TASK [Print report file information] ******************************************* 2025-01-16 15:25:33.986276 | 
orchestrator | Thursday 16 January 2025 15:25:32 +0000 (0:00:01.430) 0:00:30.068 ****** 2025-01-16 15:25:33.986290 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-01-16 15:25:33.986304 | orchestrator |  "msg": [ 2025-01-16 15:25:33.986317 | orchestrator |  "Validator run completed.", 2025-01-16 15:25:33.986331 | orchestrator |  "You can find the report file here:", 2025-01-16 15:25:33.986345 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2025-01-16T15:25:03+00:00-report.json", 2025-01-16 15:25:33.986360 | orchestrator |  "on the following host:", 2025-01-16 15:25:33.986374 | orchestrator |  "testbed-manager" 2025-01-16 15:25:33.986388 | orchestrator |  ] 2025-01-16 15:25:33.986402 | orchestrator | } 2025-01-16 15:25:33.986415 | orchestrator | 2025-01-16 15:25:33.986429 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 15:25:33.986445 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-01-16 15:25:33.986460 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 15:25:33.986493 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 15:25:34.094843 | orchestrator | 2025-01-16 15:25:34.094986 | orchestrator | 2025-01-16 15:25:34.095018 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 15:25:34.095045 | orchestrator | Thursday 16 January 2025 15:25:33 +0000 (0:00:01.115) 0:00:31.183 ****** 2025-01-16 15:25:34.095069 | orchestrator | =============================================================================== 2025-01-16 15:25:34.095092 | orchestrator | Aggregate test results step one ----------------------------------------- 1.76s 2025-01-16 15:25:34.095114 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.70s 2025-01-16 15:25:34.095138 | orchestrator | Get timestamp for report file ------------------------------------------- 1.56s 2025-01-16 15:25:34.095163 | orchestrator | Write report file ------------------------------------------------------- 1.43s 2025-01-16 15:25:34.095188 | orchestrator | Get container info ------------------------------------------------------ 1.28s 2025-01-16 15:25:34.095210 | orchestrator | Create report output directory ------------------------------------------ 1.24s 2025-01-16 15:25:34.095234 | orchestrator | Flush handlers ---------------------------------------------------------- 1.19s 2025-01-16 15:25:34.095257 | orchestrator | Print report file information ------------------------------------------- 1.12s 2025-01-16 15:25:34.095280 | orchestrator | Flush handlers ---------------------------------------------------------- 1.11s 2025-01-16 15:25:34.095329 | orchestrator | Prepare test data for container existance test -------------------------- 1.10s 2025-01-16 15:25:34.095370 | orchestrator | Set test result to passed if container is existing ---------------------- 0.97s 2025-01-16 15:25:34.095395 | orchestrator | Aggregate test results step one ----------------------------------------- 0.95s 2025-01-16 15:25:34.095419 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.89s 2025-01-16 15:25:34.095436 | orchestrator | Prepare test data ------------------------------------------------------- 0.87s 2025-01-16 15:25:34.095453 | orchestrator | 
Set test result to failed if ceph-mgr is not running -------------------- 0.85s 2025-01-16 15:25:34.095468 | orchestrator | Set test result to failed if container is missing ----------------------- 0.85s 2025-01-16 15:25:34.095484 | orchestrator | Aggregate test results step three --------------------------------------- 0.85s 2025-01-16 15:25:34.095546 | orchestrator | Aggregate test results step three --------------------------------------- 0.85s 2025-01-16 15:25:34.095585 | orchestrator | Aggregate test results step two ----------------------------------------- 0.84s 2025-01-16 15:25:34.095602 | orchestrator | Print report file information ------------------------------------------- 0.83s 2025-01-16 15:25:34.095636 | orchestrator | + osism validate ceph-osds 2025-01-16 15:25:49.363621 | orchestrator | 2025-01-16 15:25:49.363750 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2025-01-16 15:25:49.363780 | orchestrator | 2025-01-16 15:25:49.363800 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-01-16 15:25:49.363819 | orchestrator | Thursday 16 January 2025 15:25:38 +0000 (0:00:01.162) 0:00:01.162 ****** 2025-01-16 15:25:49.363837 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-01-16 15:25:49.363856 | orchestrator | 2025-01-16 15:25:49.363874 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-01-16 15:25:49.363892 | orchestrator | Thursday 16 January 2025 15:25:41 +0000 (0:00:02.534) 0:00:03.697 ****** 2025-01-16 15:25:49.363910 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-01-16 15:25:49.363928 | orchestrator | 2025-01-16 15:25:49.363946 | orchestrator | TASK [Create report output directory] ****************************************** 2025-01-16 15:25:49.363964 | orchestrator | Thursday 16 January 2025 15:25:41 +0000 (0:00:00.820) 0:00:04.518 ****** 2025-01-16 15:25:49.363981 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-01-16 15:25:49.363999 | orchestrator | 2025-01-16 15:25:49.364018 | orchestrator | TASK [Define report vars] ****************************************************** 2025-01-16 15:25:49.364035 | orchestrator | Thursday 16 January 2025 15:25:43 +0000 (0:00:01.185) 0:00:05.704 ****** 2025-01-16 15:25:49.364054 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:25:49.364075 | orchestrator | 2025-01-16 15:25:49.364100 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-01-16 15:25:49.364121 | orchestrator | Thursday 16 January 2025 15:25:43 +0000 (0:00:00.738) 0:00:06.442 ****** 2025-01-16 15:25:49.364142 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:25:49.364162 | orchestrator | 2025-01-16 15:25:49.364180 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-01-16 15:25:49.364200 | orchestrator | Thursday 16 January 2025 15:25:44 +0000 (0:00:00.739) 0:00:07.181 ****** 2025-01-16 15:25:49.364219 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:25:49.364239 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:25:49.364257 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:25:49.364276 | orchestrator | 2025-01-16 15:25:49.364295 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-01-16 15:25:49.364314 | orchestrator | Thursday 16 January 2025 15:25:45 +0000 
(0:00:01.118) 0:00:08.300 ****** 2025-01-16 15:25:49.364332 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:25:49.364351 | orchestrator | 2025-01-16 15:25:49.364370 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-01-16 15:25:49.364389 | orchestrator | Thursday 16 January 2025 15:25:46 +0000 (0:00:00.747) 0:00:09.047 ****** 2025-01-16 15:25:49.364407 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:25:49.364425 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:25:49.364445 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:25:49.364463 | orchestrator | 2025-01-16 15:25:49.364482 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2025-01-16 15:25:49.364529 | orchestrator | Thursday 16 January 2025 15:25:47 +0000 (0:00:00.881) 0:00:09.929 ****** 2025-01-16 15:25:49.364549 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:25:49.364567 | orchestrator | 2025-01-16 15:25:49.364586 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-01-16 15:25:49.364604 | orchestrator | Thursday 16 January 2025 15:25:48 +0000 (0:00:00.979) 0:00:10.908 ****** 2025-01-16 15:25:49.364623 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:25:49.364643 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:25:49.364682 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:25:49.364694 | orchestrator | 2025-01-16 15:25:49.364729 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2025-01-16 15:25:49.364741 | orchestrator | Thursday 16 January 2025 15:25:49 +0000 (0:00:00.948) 0:00:11.857 ****** 2025-01-16 15:25:49.364753 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8482a7ae1600e68831a4f97b9eb5b72f94f61e2b05ddf0dd222213c2aab136a5', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 4 minutes (healthy)'})  2025-01-16 15:25:49.364769 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b226ac2d09f29e6c9e0da328ca0618f01db2777947ff4acccc5c96c15c994812', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 4 minutes (healthy)'})  2025-01-16 15:25:49.364781 | orchestrator | skipping: [testbed-node-3] => (item={'id': '823e672de6ce982fb37a405fa50de28dbbbaa790188a48a0ef798061b41155aa', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 5 minutes (healthy)'})  2025-01-16 15:25:49.364793 | orchestrator | skipping: [testbed-node-3] => (item={'id': '40ab7656164df65052638bfd3ac73f7cdd37d07e62b7e911091abfca4196e630', 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 7 minutes'})  2025-01-16 15:25:49.364806 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4198190cade62d87cc5c388f2ae708fbf2a4daeb049c292511017d5ab586f5b0', 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2025-01-16 15:25:49.364835 | orchestrator | skipping: [testbed-node-3] => (item={'id': '83d519ef5f6ae7c21045799337e6dbac60d9435750e5eb593de8c20c00130a1a', 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2025-01-16 
15:25:50.197875 | orchestrator | skipping: [testbed-node-3] => (item={'id': '481b97ef194b256445f4e6f4a4a7b411a341dee5edc03a9840119002fa86362f', 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 8 minutes'})  2025-01-16 15:25:50.198012 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2cba95c1d349e12fa74e2731cb8c14370f29c97d62d43ede66d945b1bacc6fe4', 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 8 minutes'})  2025-01-16 15:25:50.198092 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd037f03329d03b1e2c205cf0a4f612fdf8c866a63c5545faf41e612583f49aa9', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-01-16 15:25:50.198110 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1a7765e87e2913c71117f0bc3f32b7bdbe12201040ef94bdef562e8ec6cb8a59', 'image': 'nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy', 'name': '/ceph-rgw-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 16 minutes'})  2025-01-16 15:25:50.198126 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6daf4b81c2391250667ffe1ec36bd378e8a5518f6db5b71f0749fed6f8304dff', 'image': 'nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 17 minutes'})  2025-01-16 15:25:50.198142 | orchestrator | skipping: [testbed-node-3] => (item={'id': '15c3ac2215062bd3ce4235350fd3c95b6a06c5a4ee09347f14a51d98c76c8e44', 'image': 'nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 17 minutes'})  2025-01-16 15:25:50.198158 | orchestrator | ok: [testbed-node-3] => (item={'id': 'e5242f42f215473d4ecad3600f37c924a930cddede77536e571fb75f625425a9', 'image': 'nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 18 minutes'}) 2025-01-16 15:25:50.198196 | orchestrator | ok: [testbed-node-3] => (item={'id': 'd9af8bb55b5262d2d2621b6427a94f0d4851c199882ac1aaa4d7c5b259214599', 'image': 'nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 18 minutes'}) 2025-01-16 15:25:50.198213 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1d251c5cdc3ceb603e4b695d1d2421ed7f57364f157e21a6992ae7c976f421ac', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 21 minutes'})  2025-01-16 15:25:50.198228 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a4bc832ef0676167b17f73cefc3401fdbb570ad433664a4be1fb4ce596ba3c1d', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 22 minutes (healthy)'})  2025-01-16 15:25:50.198244 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ef005e47881b71edd03b8772c5db314ce5b3326bdf1c5657fdcedcd8e34d8a30', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 22 minutes (healthy)'})  2025-01-16 15:25:50.198259 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ff9254697bcc846cf01d872e68e327bb367a7c4f45f9c21ca5a50548a8652bc9', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'name': '/cron', 'state': 'running', 'status': 'Up 23 minutes'})  2025-01-16 15:25:50.198274 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c4c2045fb5b424fa9d0c5a65183d1f0d9ab8444ffc654ba8133a857002300eba', 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 24 minutes'})  2025-01-16 15:25:50.198294 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4f790785c1273556d3e0a9a4bb690b85252c29ca95f4952fb0bedcb14a3fcf05', 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'name': '/fluentd', 'state': 'running', 'status': 'Up 24 minutes'})  2025-01-16 15:25:50.198328 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd9be7f9efb7dcae7189cd72b6bf390c96a48bae4a52553fe471d601d826a1e46', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 4 minutes (healthy)'})  2025-01-16 15:25:50.198344 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'bdcdfb769ea24eafa7985093caed475de2e32f3c9a612f8f94185978539eddad', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 4 minutes (healthy)'})  2025-01-16 15:25:50.198360 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f857d086f32e94ff44398f3e9768b84e1819c5889a3a9b35e8d2fa5933e8393d', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 5 minutes (healthy)'})  2025-01-16 15:25:50.198376 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fcbf2f34d392707ea221fd3e2eedabbb90695fc25dad943fdb8505918bcc295a', 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 7 minutes'})  2025-01-16 15:25:50.198392 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7a7c314b3d147d8b11a4b17fbaf9d081a5deb28ee6d8c1e0b6c24ec87cd0a64a', 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2025-01-16 15:25:50.198408 | orchestrator | skipping: [testbed-node-4] => (item={'id': '504a32948ce3fdd02a1e757b98c3335f56499fd1371d76079f0a1638d50d268e', 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2025-01-16 15:25:50.198426 | orchestrator | skipping: [testbed-node-4] => (item={'id': '022aad5da8c53bbf985238b69340d0ee135b53fa258ed2ecde0d2ceacfef368a', 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 8 minutes'})  2025-01-16 15:25:50.198461 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f5e870f1a7bcb30097454c3f27f813b1bf27b5362d72688e03adf2dceb5c3b4b', 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 8 minutes'})  2025-01-16 15:25:50.198476 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b9e755cf7d6855cbddf3f6c607710fbc2b8a467a0972e3c55e80a8d37d4e241d', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-01-16 15:25:50.198490 | orchestrator | 
skipping: [testbed-node-4] => (item={'id': '10b9b4e9cd7b0876d03f6f8331ffd046dc34b0953318088940747770835f8851', 'image': 'nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy', 'name': '/ceph-rgw-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 16 minutes'})  2025-01-16 15:25:50.198526 | orchestrator | skipping: [testbed-node-4] => (item={'id': '81b41b9d54489b6bafae8cdb8c92e63f793cfe6cfe3543e35dcf8c7677fe55ac', 'image': 'nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 17 minutes'})  2025-01-16 15:25:50.198540 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a25d519c9c3c5524ec637b2ffe5dc7c5abd5ce27dc2175b24c9cf14a46c104ca', 'image': 'nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 17 minutes'})  2025-01-16 15:25:50.198555 | orchestrator | ok: [testbed-node-4] => (item={'id': '2c09c9c17f58c5dddf2621bb39e7ff3279695d0b629b74a7951f194e9b3aa7e1', 'image': 'nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 18 minutes'}) 2025-01-16 15:25:50.198569 | orchestrator | ok: [testbed-node-4] => (item={'id': '471bb85fc20c99f8a46a03d7b28e7ca957b206ad8da0ec9ca29e131451b39b89', 'image': 'nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 18 minutes'}) 2025-01-16 15:25:50.198604 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ecbf166762c7de25c8ae2316545446dedfd323cbf4860623fd725cc842d95412', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 21 minutes'})  2025-01-16 15:26:07.422424 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5c9f6795b49fe4c791a25fdfba251602c5465cc3d98241ecc8415b65ab834ae2', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 22 minutes (healthy)'})  2025-01-16 15:26:07.422623 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0dd308eb2437c4950ef42c14f8b87e31a3ffeee571daf6bcb06abddfe3f6e89b', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 22 minutes (healthy)'})  2025-01-16 15:26:07.422656 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6c09d750e61b7307d6c81a66c0880607650bc9f2d98f8c80bb8f5488225ef59a', 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'name': '/cron', 'state': 'running', 'status': 'Up 23 minutes'})  2025-01-16 15:26:07.422680 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2ac0325dad66c05cbfae5fe3e52c234515625251b54d3bb2a1c8483417fc23fa', 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 24 minutes'})  2025-01-16 15:26:07.422703 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9ed09bca76e1517604a3f7612ea1c872063e2e571363faee21445d4f72efd9d4', 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'name': '/fluentd', 'state': 'running', 'status': 'Up 24 minutes'})  2025-01-16 15:26:07.422727 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd7ddecce7a9ebac4609ce358c146a664e4d7dff7aacf1d9f0f87e6e8671f8cd2', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-compute:2024.1', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 4 minutes 
(healthy)'})  2025-01-16 15:26:07.422779 | orchestrator | skipping: [testbed-node-5] => (item={'id': '46a0c7f0e085d6b99cb6c71a2c3e36e3c7018ac7b9c8c8800bec39222a9217f5', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-libvirt:2024.1', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 4 minutes (healthy)'})  2025-01-16 15:26:07.422804 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e9af7797ce314871c4bbd661ca20f590bccd42365d1b0a821a0fafaf97b24b19', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-ssh:2024.1', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 5 minutes (healthy)'})  2025-01-16 15:26:07.422823 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e7cd95e28e448a85fcea077adff36283b14f2f65880d8c6858c50799e78c6034', 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-libvirt-exporter:2024.1', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 7 minutes'})  2025-01-16 15:26:07.422839 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'cc57b3d1fd5c1311845dcdc7bd23055a46056124f4d5caf038580d3c102f30aa', 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2025-01-16 15:26:07.422854 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b4fd32ed2133deb62ad261d7d320ddc9e71f6a689b873d433269f657bd9dc251', 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2025-01-16 15:26:07.422869 | orchestrator | skipping: [testbed-node-5] => (item={'id': '24fdbc11c067249d892087798f26a52ec22da56cfaaec82e4a4748c9c00bf5bf', 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-cadvisor:2024.1', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 8 minutes'})  2025-01-16 15:26:07.422883 | orchestrator | skipping: [testbed-node-5] => (item={'id': '114c15faeb13641169d4425c121606878e41bb88218f4a697c9ab29071c9c293', 'image': 'nexus.testbed.osism.xyz:8193/kolla/prometheus-node-exporter:2024.1', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 8 minutes'})  2025-01-16 15:26:07.422898 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'fccd28e6404b4cd19119aeb533356cebcb2df4419931ae2c12cab5ba6f10de77', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-01-16 15:26:07.422931 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c6b19c5ef71a0b196b0e6dbedabd402d90dabba71ff83382e73f742a66f5449f', 'image': 'nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy', 'name': '/ceph-rgw-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 16 minutes'})  2025-01-16 15:26:07.422967 | orchestrator | skipping: [testbed-node-5] => (item={'id': '45cfcd3a23e6e81d412e0326ea45e4e3ed64015cd38fb5ebb968fe5ae7b243ee', 'image': 'nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 17 minutes'})  2025-01-16 15:26:07.422992 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd97e28acbc2990c0ed136d60150791286cb0f83e23c83816746fffe28c4be5f6', 'image': 'nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 17 minutes'})  2025-01-16 15:26:07.423017 | orchestrator | ok: [testbed-node-5] => (item={'id': 
'5b7c3666a51104b17754de235a42c991a10c708e45c47422e29a4633e7eb48b7', 'image': 'nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 18 minutes'}) 2025-01-16 15:26:07.423043 | orchestrator | ok: [testbed-node-5] => (item={'id': 'fe17d9bd86489a8e8f2d64b7413ef09bcb497ccbce6ded96dfcd51e3a2f04f19', 'image': 'nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 18 minutes'}) 2025-01-16 15:26:07.423071 | orchestrator | skipping: [testbed-node-5] => (item={'id': '298ce0e0f118046fc65b602c36292b098b4868cadc392f2f597827f49d335593', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 21 minutes'})  2025-01-16 15:26:07.423088 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b7db504f65cd37d98d279e5335ec83351abd8875f01dd6ed0cb6482c44ecd669', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 22 minutes (healthy)'})  2025-01-16 15:26:07.423104 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b84c29bb6e1700dbe51771c37a0f62977ebbbbb4722342bcfb5f2e043dbd518c', 'image': 'nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 22 minutes (healthy)'})  2025-01-16 15:26:07.423120 | orchestrator | skipping: [testbed-node-5] => (item={'id': '294d7972515d68c9fb10b9fdb0226cbd9736d47ab99cdadd055c9037fd5ae487', 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'name': '/cron', 'state': 'running', 'status': 'Up 23 minutes'})  2025-01-16 15:26:07.423136 | orchestrator | skipping: [testbed-node-5] => (item={'id': '983c5946737c8dccca6736a97631d38f38ce51d488f5eb7a8e0ea212b4132c59', 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 24 minutes'})  2025-01-16 15:26:07.423153 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7fbf02527b4d0d57b909863dabb265a60de17243275a15143f1f96ed86281ced', 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'name': '/fluentd', 'state': 'running', 'status': 'Up 24 minutes'})  2025-01-16 15:26:07.423169 | orchestrator | 2025-01-16 15:26:07.423186 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2025-01-16 15:26:07.423203 | orchestrator | Thursday 16 January 2025 15:25:50 +0000 (0:00:00.977) 0:00:12.834 ****** 2025-01-16 15:26:07.423219 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:26:07.423237 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:26:07.423253 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:26:07.423268 | orchestrator | 2025-01-16 15:26:07.423288 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2025-01-16 15:26:07.423305 | orchestrator | Thursday 16 January 2025 15:25:51 +0000 (0:00:00.881) 0:00:13.716 ****** 2025-01-16 15:26:07.423321 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:26:07.423341 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:26:07.423364 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:26:07.423387 | orchestrator | 2025-01-16 15:26:07.423409 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2025-01-16 15:26:07.423431 | orchestrator | Thursday 16 January 2025 15:25:51 +0000 (0:00:00.878) 0:00:14.595 
****** 2025-01-16 15:26:07.423452 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:26:07.423475 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:26:07.423528 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:26:07.423552 | orchestrator | 2025-01-16 15:26:07.423584 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-01-16 15:26:07.423608 | orchestrator | Thursday 16 January 2025 15:25:52 +0000 (0:00:00.985) 0:00:15.581 ****** 2025-01-16 15:26:07.423631 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:26:07.423654 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:26:07.423680 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:26:07.423704 | orchestrator | 2025-01-16 15:26:07.423727 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2025-01-16 15:26:07.423762 | orchestrator | Thursday 16 January 2025 15:25:53 +0000 (0:00:00.851) 0:00:16.432 ****** 2025-01-16 15:26:32.165527 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2025-01-16 15:26:32.165620 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2025-01-16 15:26:32.165643 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:26:32.165650 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2025-01-16 15:26:32.165656 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2025-01-16 15:26:32.165662 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:26:32.165668 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2025-01-16 15:26:32.165674 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2025-01-16 15:26:32.165680 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:26:32.165686 | orchestrator | 2025-01-16 15:26:32.165692 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2025-01-16 15:26:32.165699 | orchestrator | Thursday 16 January 2025 15:25:54 +0000 (0:00:00.859) 0:00:17.291 ****** 2025-01-16 15:26:32.165705 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:26:32.165711 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:26:32.165717 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:26:32.165723 | orchestrator | 2025-01-16 15:26:32.165729 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-01-16 15:26:32.165734 | orchestrator | Thursday 16 January 2025 15:25:55 +0000 (0:00:00.875) 0:00:18.167 ****** 2025-01-16 15:26:32.165740 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:26:32.165745 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:26:32.165751 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:26:32.165756 | orchestrator | 2025-01-16 15:26:32.165762 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-01-16 15:26:32.165768 | orchestrator | Thursday 16 January 2025 15:25:56 +0000 (0:00:00.870) 0:00:19.037 ****** 2025-01-16 15:26:32.165773 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:26:32.165778 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:26:32.165784 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:26:32.165789 | orchestrator | 
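The container tasks in this play boil down to two checks per OSD host: count the running ceph-osd containers against the expected number of OSDs, and flag any ceph-osd container that exists but is not running. A minimal manual equivalent — an illustrative sketch assuming Docker as the container runtime and the ceph-osd-<id> naming scheme visible in this log, not the validator's own implementation — would be:

# count running ceph-osd containers on one node (this testbed expects 2 per OSD node)
docker ps --filter 'name=ceph-osd-' --filter 'status=running' --format '{{.Names}}' | wc -l
# list ceph-osd containers that exist but are not running (should print nothing)
docker ps -a --filter 'name=ceph-osd-' --filter 'status=exited' --format '{{.Names}} {{.Status}}'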
2025-01-16 15:26:32.165795 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2025-01-16 15:26:32.165801 | orchestrator | Thursday 16 January 2025 15:25:57 +0000 (0:00:00.853) 0:00:19.891 ****** 2025-01-16 15:26:32.165806 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:26:32.165812 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:26:32.165817 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:26:32.165823 | orchestrator | 2025-01-16 15:26:32.165828 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-01-16 15:26:32.165834 | orchestrator | Thursday 16 January 2025 15:25:58 +0000 (0:00:00.869) 0:00:20.760 ****** 2025-01-16 15:26:32.165840 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:26:32.165845 | orchestrator | 2025-01-16 15:26:32.165851 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-01-16 15:26:32.165856 | orchestrator | Thursday 16 January 2025 15:25:59 +0000 (0:00:00.952) 0:00:21.713 ****** 2025-01-16 15:26:32.165862 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:26:32.165867 | orchestrator | 2025-01-16 15:26:32.165873 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-01-16 15:26:32.165879 | orchestrator | Thursday 16 January 2025 15:26:00 +0000 (0:00:00.972) 0:00:22.685 ****** 2025-01-16 15:26:32.165884 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:26:32.165890 | orchestrator | 2025-01-16 15:26:32.165895 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-01-16 15:26:32.165901 | orchestrator | Thursday 16 January 2025 15:26:00 +0000 (0:00:00.826) 0:00:23.512 ****** 2025-01-16 15:26:32.165906 | orchestrator | 2025-01-16 15:26:32.165912 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-01-16 15:26:32.165917 | orchestrator | Thursday 16 January 2025 15:26:01 +0000 (0:00:00.288) 0:00:23.800 ****** 2025-01-16 15:26:32.165923 | orchestrator | 2025-01-16 15:26:32.165929 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-01-16 15:26:32.165938 | orchestrator | Thursday 16 January 2025 15:26:01 +0000 (0:00:00.287) 0:00:24.087 ****** 2025-01-16 15:26:32.165943 | orchestrator | 2025-01-16 15:26:32.165949 | orchestrator | TASK [Print report file information] ******************************************* 2025-01-16 15:26:32.165954 | orchestrator | Thursday 16 January 2025 15:26:01 +0000 (0:00:00.521) 0:00:24.608 ****** 2025-01-16 15:26:32.165960 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:26:32.165965 | orchestrator | 2025-01-16 15:26:32.165971 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2025-01-16 15:26:32.165976 | orchestrator | Thursday 16 January 2025 15:26:02 +0000 (0:00:00.820) 0:00:25.429 ****** 2025-01-16 15:26:32.165982 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:26:32.165987 | orchestrator | 2025-01-16 15:26:32.165993 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-01-16 15:26:32.165999 | orchestrator | Thursday 16 January 2025 15:26:03 +0000 (0:00:00.814) 0:00:26.244 ****** 2025-01-16 15:26:32.166004 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:26:32.166010 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:26:32.166054 | orchestrator | ok: 
[testbed-node-5] 2025-01-16 15:26:32.166060 | orchestrator | 2025-01-16 15:26:32.166066 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2025-01-16 15:26:32.166083 | orchestrator | Thursday 16 January 2025 15:26:04 +0000 (0:00:00.842) 0:00:27.087 ****** 2025-01-16 15:26:32.166090 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:26:32.166096 | orchestrator | 2025-01-16 15:26:32.166102 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2025-01-16 15:26:32.166108 | orchestrator | Thursday 16 January 2025 15:26:05 +0000 (0:00:00.805) 0:00:27.893 ****** 2025-01-16 15:26:32.166115 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-01-16 15:26:32.166121 | orchestrator | 2025-01-16 15:26:32.166139 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2025-01-16 15:26:32.166145 | orchestrator | Thursday 16 January 2025 15:26:07 +0000 (0:00:02.162) 0:00:30.056 ****** 2025-01-16 15:26:32.166170 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:26:32.166177 | orchestrator | 2025-01-16 15:26:32.166183 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2025-01-16 15:26:32.166190 | orchestrator | Thursday 16 January 2025 15:26:08 +0000 (0:00:00.749) 0:00:30.805 ****** 2025-01-16 15:26:32.166196 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:26:32.166202 | orchestrator | 2025-01-16 15:26:32.166208 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2025-01-16 15:26:32.166214 | orchestrator | Thursday 16 January 2025 15:26:08 +0000 (0:00:00.801) 0:00:31.606 ****** 2025-01-16 15:26:32.166220 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:26:32.166226 | orchestrator | 2025-01-16 15:26:32.166233 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2025-01-16 15:26:32.166239 | orchestrator | Thursday 16 January 2025 15:26:09 +0000 (0:00:00.715) 0:00:32.322 ****** 2025-01-16 15:26:32.166245 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:26:32.166251 | orchestrator | 2025-01-16 15:26:32.166258 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-01-16 15:26:32.166264 | orchestrator | Thursday 16 January 2025 15:26:10 +0000 (0:00:00.723) 0:00:33.046 ****** 2025-01-16 15:26:32.166270 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:26:32.166276 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:26:32.166554 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:26:32.166567 | orchestrator | 2025-01-16 15:26:32.166574 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2025-01-16 15:26:32.166581 | orchestrator | Thursday 16 January 2025 15:26:11 +0000 (0:00:00.887) 0:00:33.933 ****** 2025-01-16 15:26:32.166587 | orchestrator | changed: [testbed-node-3] 2025-01-16 15:26:32.166593 | orchestrator | changed: [testbed-node-4] 2025-01-16 15:26:32.166599 | orchestrator | changed: [testbed-node-5] 2025-01-16 15:26:32.166604 | orchestrator | 2025-01-16 15:26:32.166618 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2025-01-16 15:26:32.166623 | orchestrator | Thursday 16 January 2025 15:26:12 +0000 (0:00:01.655) 0:00:35.588 ****** 2025-01-16 15:26:32.166629 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:26:32.166635 | orchestrator | ok: 
[testbed-node-4] 2025-01-16 15:26:32.166643 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:26:32.166659 | orchestrator | 2025-01-16 15:26:32.166668 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2025-01-16 15:26:32.166677 | orchestrator | Thursday 16 January 2025 15:26:13 +0000 (0:00:01.000) 0:00:36.589 ****** 2025-01-16 15:26:32.166686 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:26:32.166701 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:26:32.166710 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:26:32.166717 | orchestrator | 2025-01-16 15:26:32.166726 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2025-01-16 15:26:32.166735 | orchestrator | Thursday 16 January 2025 15:26:14 +0000 (0:00:00.918) 0:00:37.507 ****** 2025-01-16 15:26:32.166744 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:26:32.166752 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:26:32.166761 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:26:32.166770 | orchestrator | 2025-01-16 15:26:32.166779 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2025-01-16 15:26:32.166788 | orchestrator | Thursday 16 January 2025 15:26:15 +0000 (0:00:00.873) 0:00:38.380 ****** 2025-01-16 15:26:32.166797 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:26:32.166806 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:26:32.166814 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:26:32.166823 | orchestrator | 2025-01-16 15:26:32.166832 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2025-01-16 15:26:32.166840 | orchestrator | Thursday 16 January 2025 15:26:16 +0000 (0:00:00.897) 0:00:39.278 ****** 2025-01-16 15:26:32.166850 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:26:32.166856 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:26:32.166861 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:26:32.166867 | orchestrator | 2025-01-16 15:26:32.166872 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2025-01-16 15:26:32.166878 | orchestrator | Thursday 16 January 2025 15:26:17 +0000 (0:00:00.882) 0:00:40.161 ****** 2025-01-16 15:26:32.166884 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:26:32.166889 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:26:32.166895 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:26:32.166900 | orchestrator | 2025-01-16 15:26:32.166906 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-01-16 15:26:32.166911 | orchestrator | Thursday 16 January 2025 15:26:18 +0000 (0:00:00.884) 0:00:41.045 ****** 2025-01-16 15:26:32.166917 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:26:32.166922 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:26:32.166928 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:26:32.166933 | orchestrator | 2025-01-16 15:26:32.166939 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2025-01-16 15:26:32.166945 | orchestrator | Thursday 16 January 2025 15:26:19 +0000 (0:00:00.953) 0:00:41.999 ****** 2025-01-16 15:26:32.166951 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:26:32.166956 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:26:32.166962 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:26:32.166967 | orchestrator | 
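The cluster-level checks in this play ("Get ceph osd tree", "Get OSDs that are not up or in", "List ceph LVM volumes and collect data", "Get unencrypted and encrypted OSDs") can be reproduced by hand on any host with a working ceph CLI and admin keyring. The snippet below is a hedged sketch, not the validator's code; the JSON field and tag names (status, reweight, ceph.encrypted) are as produced by recent Ceph/ceph-volume releases and may differ between versions:

# OSD tree as JSON; the list of OSDs whose status is not "up" should be empty
# (an OSD that is "out" additionally shows reweight 0 in the same structure)
ceph osd tree -f json | jq -r '.nodes[] | select(.type=="osd" and .status!="up") | .name'
# per-OSD LVM metadata; the "ceph.encrypted" tag separates encrypted from plain OSDs
ceph-volume lvm list --format json | \
  jq -r 'to_entries[] | "osd \(.key) encrypted=\(.value[0].tags["ceph.encrypted"] // "n/a")"'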
2025-01-16 15:26:32.166973 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2025-01-16 15:26:32.166982 | orchestrator | Thursday 16 January 2025 15:26:20 +0000 (0:00:01.031) 0:00:43.031 ****** 2025-01-16 15:26:32.166988 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:26:32.166993 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:26:32.166999 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:26:32.167004 | orchestrator | 2025-01-16 15:26:32.167010 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2025-01-16 15:26:32.167020 | orchestrator | Thursday 16 January 2025 15:26:21 +0000 (0:00:00.892) 0:00:43.923 ****** 2025-01-16 15:26:32.167031 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:26:32.167037 | orchestrator | skipping: [testbed-node-4] 2025-01-16 15:26:32.167043 | orchestrator | skipping: [testbed-node-5] 2025-01-16 15:26:32.167048 | orchestrator | 2025-01-16 15:26:32.167054 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2025-01-16 15:26:32.167070 | orchestrator | Thursday 16 January 2025 15:26:22 +0000 (0:00:00.871) 0:00:44.795 ****** 2025-01-16 15:26:32.298451 | orchestrator | ok: [testbed-node-3] 2025-01-16 15:26:32.298555 | orchestrator | ok: [testbed-node-4] 2025-01-16 15:26:32.298562 | orchestrator | ok: [testbed-node-5] 2025-01-16 15:26:32.298567 | orchestrator | 2025-01-16 15:26:32.298573 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-01-16 15:26:32.298580 | orchestrator | Thursday 16 January 2025 15:26:23 +0000 (0:00:00.877) 0:00:45.672 ****** 2025-01-16 15:26:32.298586 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-01-16 15:26:32.298591 | orchestrator | 2025-01-16 15:26:32.298596 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-01-16 15:26:32.298602 | orchestrator | Thursday 16 January 2025 15:26:23 +0000 (0:00:00.946) 0:00:46.619 ****** 2025-01-16 15:26:32.298607 | orchestrator | skipping: [testbed-node-3] 2025-01-16 15:26:32.298612 | orchestrator | 2025-01-16 15:26:32.298618 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-01-16 15:26:32.298623 | orchestrator | Thursday 16 January 2025 15:26:24 +0000 (0:00:00.823) 0:00:47.442 ****** 2025-01-16 15:26:32.298628 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-01-16 15:26:32.298633 | orchestrator | 2025-01-16 15:26:32.298637 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-01-16 15:26:32.298642 | orchestrator | Thursday 16 January 2025 15:26:26 +0000 (0:00:01.704) 0:00:49.147 ****** 2025-01-16 15:26:32.298647 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-01-16 15:26:32.298652 | orchestrator | 2025-01-16 15:26:32.298657 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-01-16 15:26:32.298662 | orchestrator | Thursday 16 January 2025 15:26:27 +0000 (0:00:00.889) 0:00:50.037 ****** 2025-01-16 15:26:32.298668 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-01-16 15:26:32.298719 | orchestrator | 2025-01-16 15:26:32.298726 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-01-16 15:26:32.298731 | orchestrator | Thursday 16 January 2025 
15:26:28 +0000 (0:00:00.846) 0:00:50.884 ****** 2025-01-16 15:26:32.298736 | orchestrator | 2025-01-16 15:26:32.298741 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-01-16 15:26:32.298747 | orchestrator | Thursday 16 January 2025 15:26:28 +0000 (0:00:00.298) 0:00:51.182 ****** 2025-01-16 15:26:32.298752 | orchestrator | 2025-01-16 15:26:32.298757 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-01-16 15:26:32.298762 | orchestrator | Thursday 16 January 2025 15:26:28 +0000 (0:00:00.305) 0:00:51.487 ****** 2025-01-16 15:26:32.298767 | orchestrator | 2025-01-16 15:26:32.298772 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-01-16 15:26:32.298776 | orchestrator | Thursday 16 January 2025 15:26:29 +0000 (0:00:00.527) 0:00:52.014 ****** 2025-01-16 15:26:32.298782 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-01-16 15:26:32.298787 | orchestrator | 2025-01-16 15:26:32.298792 | orchestrator | TASK [Print report file information] ******************************************* 2025-01-16 15:26:32.298797 | orchestrator | Thursday 16 January 2025 15:26:30 +0000 (0:00:01.454) 0:00:53.469 ****** 2025-01-16 15:26:32.298802 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2025-01-16 15:26:32.298807 | orchestrator |  "msg": [ 2025-01-16 15:26:32.298812 | orchestrator |  "Validator run completed.", 2025-01-16 15:26:32.298817 | orchestrator |  "You can find the report file here:", 2025-01-16 15:26:32.298822 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-01-16T15:25:38+00:00-report.json", 2025-01-16 15:26:32.298842 | orchestrator |  "on the following host:", 2025-01-16 15:26:32.298848 | orchestrator |  "testbed-manager" 2025-01-16 15:26:32.298853 | orchestrator |  ] 2025-01-16 15:26:32.298858 | orchestrator | } 2025-01-16 15:26:32.298863 | orchestrator | 2025-01-16 15:26:32.298868 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 15:26:32.298874 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-01-16 15:26:32.298880 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-01-16 15:26:32.298886 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-01-16 15:26:32.298891 | orchestrator | 2025-01-16 15:26:32.298895 | orchestrator | 2025-01-16 15:26:32.298900 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 15:26:32.298905 | orchestrator | Thursday 16 January 2025 15:26:32 +0000 (0:00:01.328) 0:00:54.797 ****** 2025-01-16 15:26:32.298910 | orchestrator | =============================================================================== 2025-01-16 15:26:32.298915 | orchestrator | Get timestamp for report file ------------------------------------------- 2.53s 2025-01-16 15:26:32.298920 | orchestrator | Get ceph osd tree ------------------------------------------------------- 2.16s 2025-01-16 15:26:32.298925 | orchestrator | Aggregate test results step one ----------------------------------------- 1.70s 2025-01-16 15:26:32.298939 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 1.66s 2025-01-16 15:26:32.298944 | orchestrator | Write report file 
------------------------------------------------------- 1.45s 2025-01-16 15:26:32.298949 | orchestrator | Print report file information ------------------------------------------- 1.33s 2025-01-16 15:26:32.298954 | orchestrator | Create report output directory ------------------------------------------ 1.19s 2025-01-16 15:26:32.298959 | orchestrator | Flush handlers ---------------------------------------------------------- 1.13s 2025-01-16 15:26:32.298964 | orchestrator | Calculate OSD devices for each host ------------------------------------- 1.12s 2025-01-16 15:26:32.298980 | orchestrator | Flush handlers ---------------------------------------------------------- 1.10s 2025-01-16 15:26:32.402780 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 1.03s 2025-01-16 15:26:32.402885 | orchestrator | Parse LVM data as JSON -------------------------------------------------- 1.00s 2025-01-16 15:26:32.402901 | orchestrator | Set test result to passed if count matches ------------------------------ 0.99s 2025-01-16 15:26:32.402913 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.98s 2025-01-16 15:26:32.402925 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.98s 2025-01-16 15:26:32.402937 | orchestrator | Aggregate test results step two ----------------------------------------- 0.97s 2025-01-16 15:26:32.402948 | orchestrator | Prepare test data ------------------------------------------------------- 0.95s 2025-01-16 15:26:32.402959 | orchestrator | Aggregate test results step one ----------------------------------------- 0.95s 2025-01-16 15:26:32.402970 | orchestrator | Prepare test data ------------------------------------------------------- 0.95s 2025-01-16 15:26:32.402982 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.95s 2025-01-16 15:26:32.403009 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure-services.sh 2025-01-16 15:26:32.406220 | orchestrator | + set -e 2025-01-16 15:26:32.406287 | orchestrator | + source /opt/manager-vars.sh 2025-01-16 15:26:32.406303 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-01-16 15:26:32.406316 | orchestrator | ++ NUMBER_OF_NODES=6 2025-01-16 15:26:32.406328 | orchestrator | ++ export CEPH_VERSION=quincy 2025-01-16 15:26:32.406339 | orchestrator | ++ CEPH_VERSION=quincy 2025-01-16 15:26:32.406351 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-01-16 15:26:32.406385 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-01-16 15:26:32.406408 | orchestrator | ++ export MANAGER_VERSION=latest 2025-01-16 15:26:32.406420 | orchestrator | ++ MANAGER_VERSION=latest 2025-01-16 15:26:32.406432 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-01-16 15:26:32.406443 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-01-16 15:26:32.406454 | orchestrator | ++ export ARA=false 2025-01-16 15:26:32.406466 | orchestrator | ++ ARA=false 2025-01-16 15:26:32.406502 | orchestrator | ++ export TEMPEST=false 2025-01-16 15:26:32.406517 | orchestrator | ++ TEMPEST=false 2025-01-16 15:26:32.406528 | orchestrator | ++ export IS_ZUUL=true 2025-01-16 15:26:32.406540 | orchestrator | ++ IS_ZUUL=true 2025-01-16 15:26:32.406552 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.54 2025-01-16 15:26:32.406564 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.54 2025-01-16 15:26:32.406575 | orchestrator | ++ export EXTERNAL_API=false 2025-01-16 15:26:32.406586 
| orchestrator | ++ EXTERNAL_API=false 2025-01-16 15:26:32.406597 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-01-16 15:26:32.406608 | orchestrator | ++ IMAGE_USER=ubuntu 2025-01-16 15:26:32.406619 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-01-16 15:26:32.406630 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-01-16 15:26:32.406641 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-01-16 15:26:32.406652 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-01-16 15:26:32.406663 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-01-16 15:26:32.406674 | orchestrator | + source /etc/os-release 2025-01-16 15:26:32.406686 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.1 LTS' 2025-01-16 15:26:32.406697 | orchestrator | ++ NAME=Ubuntu 2025-01-16 15:26:32.406708 | orchestrator | ++ VERSION_ID=24.04 2025-01-16 15:26:32.406719 | orchestrator | ++ VERSION='24.04.1 LTS (Noble Numbat)' 2025-01-16 15:26:32.406730 | orchestrator | ++ VERSION_CODENAME=noble 2025-01-16 15:26:32.406742 | orchestrator | ++ ID=ubuntu 2025-01-16 15:26:32.406753 | orchestrator | ++ ID_LIKE=debian 2025-01-16 15:26:32.406771 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2025-01-16 15:26:32.417930 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2025-01-16 15:26:32.418065 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2025-01-16 15:26:32.418081 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2025-01-16 15:26:32.418094 | orchestrator | ++ UBUNTU_CODENAME=noble 2025-01-16 15:26:32.418105 | orchestrator | ++ LOGO=ubuntu-logo 2025-01-16 15:26:32.418117 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2025-01-16 15:26:32.418129 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2025-01-16 15:26:32.418141 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-01-16 15:26:32.418165 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-01-16 15:26:44.514801 | orchestrator | 2025-01-16 15:26:44.591230 | orchestrator | # Status of Elasticsearch 2025-01-16 15:26:44.591365 | orchestrator | 2025-01-16 15:26:44.591389 | orchestrator | + pushd /opt/configuration/contrib 2025-01-16 15:26:44.591407 | orchestrator | + echo 2025-01-16 15:26:44.591422 | orchestrator | + echo '# Status of Elasticsearch' 2025-01-16 15:26:44.591437 | orchestrator | + echo 2025-01-16 15:26:44.591451 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2025-01-16 15:26:44.591556 | orchestrator | OK - elasticsearch (kolla_logging) is running. 
status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 8; active_shards: 19; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=8 'active'=19 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2025-01-16 15:26:44.605952 | orchestrator | 2025-01-16 15:26:44.606127 | orchestrator | # Status of MariaDB 2025-01-16 15:26:44.606158 | orchestrator | 2025-01-16 15:26:44.606172 | orchestrator | + echo 2025-01-16 15:26:44.606182 | orchestrator | + echo '# Status of MariaDB' 2025-01-16 15:26:44.606191 | orchestrator | + echo 2025-01-16 15:26:44.606202 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root -p password -H api-int.testbed.osism.xyz -c 1 2025-01-16 15:26:44.618382 | orchestrator | Reading package lists... 2025-01-16 15:26:44.782817 | orchestrator | Building dependency tree... 2025-01-16 15:26:44.782982 | orchestrator | Reading state information... 2025-01-16 15:26:44.984808 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2025-01-16 15:26:45.061396 | orchestrator | bc set to manually installed. 2025-01-16 15:26:45.061513 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 6 not upgraded. 2025-01-16 15:26:45.061557 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2025-01-16 15:26:45.101375 | orchestrator | 2025-01-16 15:26:45.101455 | orchestrator | # Status of Prometheus 2025-01-16 15:26:45.101463 | orchestrator | 2025-01-16 15:26:45.101469 | orchestrator | + echo 2025-01-16 15:26:45.101475 | orchestrator | + echo '# Status of Prometheus' 2025-01-16 15:26:45.101505 | orchestrator | + echo 2025-01-16 15:26:45.101511 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2025-01-16 15:26:45.101529 | orchestrator | Unauthorized 2025-01-16 15:26:45.102953 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2025-01-16 15:26:45.138213 | orchestrator | Unauthorized 2025-01-16 15:26:45.139628 | orchestrator | 2025-01-16 15:26:45.377695 | orchestrator | # Status of RabbitMQ 2025-01-16 15:26:45.377817 | orchestrator | 2025-01-16 15:26:45.377836 | orchestrator | + echo 2025-01-16 15:26:45.377849 | orchestrator | + echo '# Status of RabbitMQ' 2025-01-16 15:26:45.377864 | orchestrator | + echo 2025-01-16 15:26:45.377878 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2025-01-16 15:26:45.377909 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2025-01-16 15:26:45.382310 | orchestrator | 2025-01-16 15:26:45.385558 | orchestrator | # Status of Redis 2025-01-16 15:26:45.385623 | orchestrator | 2025-01-16 15:26:45.385635 | orchestrator | + echo 2025-01-16 15:26:45.385644 | orchestrator | + echo '# Status of Redis' 2025-01-16 15:26:45.385654 | orchestrator | + echo 2025-01-16 15:26:45.385664 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2025-01-16 15:26:45.385684 | orchestrator | TCP OK - 0.001 second response time on 192.168.16.10 port 6379|time=0.001375s;;;0.000000;10.000000 2025-01-16 15:26:46.342218 | orchestrator | 2025-01-16 15:26:46.342297 | orchestrator | # Create backup of MariaDB database 2025-01-16 15:26:46.342306 | orchestrator | + popd 2025-01-16 
15:26:46.342311 | orchestrator | + echo 2025-01-16 15:26:46.342316 | orchestrator | + echo '# Create backup of MariaDB database' 2025-01-16 15:26:46.342322 | orchestrator | + echo 2025-01-16 15:26:46.342328 | orchestrator | 2025-01-16 15:26:46.342334 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2025-01-16 15:26:46.342350 | orchestrator | 2025-01-16 15:26:46 | INFO  | Task 69634a5f-c53e-49d9-9ed1-19d5da3d6c93 (mariadb_backup) was prepared for execution. 2025-01-16 15:26:48.339872 | orchestrator | 2025-01-16 15:26:46 | INFO  | It takes a moment until task 69634a5f-c53e-49d9-9ed1-19d5da3d6c93 (mariadb_backup) has been started and output is visible here. 2025-01-16 15:26:48.340010 | orchestrator | 2025-01-16 15:26:48.462186 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-01-16 15:26:48.462304 | orchestrator | 2025-01-16 15:26:48.462322 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-01-16 15:26:48.462336 | orchestrator | Thursday 16 January 2025 15:26:48 +0000 (0:00:00.137) 0:00:00.137 ****** 2025-01-16 15:26:48.462366 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:26:48.576569 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:26:48.576752 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:26:48.576773 | orchestrator | 2025-01-16 15:26:48.576788 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-01-16 15:26:48.577023 | orchestrator | Thursday 16 January 2025 15:26:48 +0000 (0:00:00.237) 0:00:00.375 ****** 2025-01-16 15:26:48.899125 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-01-16 15:26:48.899372 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-01-16 15:26:48.899406 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-01-16 15:26:48.899430 | orchestrator | 2025-01-16 15:26:48.899629 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-01-16 15:26:48.899900 | orchestrator | 2025-01-16 15:26:48.900025 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-01-16 15:26:49.120704 | orchestrator | Thursday 16 January 2025 15:26:48 +0000 (0:00:00.323) 0:00:00.698 ****** 2025-01-16 15:26:49.120834 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-01-16 15:26:49.120967 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-01-16 15:26:49.120995 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-01-16 15:26:49.121000 | orchestrator | 2025-01-16 15:26:49.121006 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-01-16 15:26:49.121014 | orchestrator | Thursday 16 January 2025 15:26:49 +0000 (0:00:00.221) 0:00:00.920 ****** 2025-01-16 15:26:49.483245 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:26:51.567745 | orchestrator | 2025-01-16 15:26:51.567884 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-01-16 15:26:51.567895 | orchestrator | Thursday 16 January 2025 15:26:49 +0000 (0:00:00.360) 0:00:01.281 ****** 2025-01-16 15:26:51.567913 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:26:51.568050 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:26:51.568062 | orchestrator | ok: [testbed-node-2] 
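[Annotation] In the infrastructure check script above, the Prometheus probes against /-/healthy and /-/ready both returned "Unauthorized", presumably because the endpoint sits behind HTTP basic auth in this deployment; since the script sends no credentials, those two lines say nothing about Prometheus health. A minimal authenticated variant could look like the sketch below (PROM_USER and PROM_PASS are placeholders, not values taken from this log):

    # Sketch, not part of the job: probe Prometheus health with basic-auth credentials.
    PROM_URL=https://api-int.testbed.osism.xyz:9091
    curl -fsS -u "${PROM_USER}:${PROM_PASS}" "${PROM_URL}/-/healthy"   # a healthy instance answers with a short "is Healthy" message
    curl -fsS -u "${PROM_USER}:${PROM_PASS}" "${PROM_URL}/-/ready"     # readiness probe, same authentication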
2025-01-16 15:26:51.568069 | orchestrator | 2025-01-16 15:26:51.568080 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2025-01-16 15:26:51.569724 | orchestrator | Thursday 16 January 2025 15:26:51 +0000 (0:00:02.085) 0:00:03.366 ****** 2025-01-16 15:27:05.928275 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-01-16 15:27:05.979677 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-01-16 15:27:05.979786 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-01-16 15:27:05.979800 | orchestrator | mariadb_bootstrap_restart 2025-01-16 15:27:05.979824 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:27:05.982463 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:27:05.982623 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:27:05.983053 | orchestrator | 2025-01-16 15:27:05.983329 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-01-16 15:27:05.983734 | orchestrator | skipping: no hosts matched 2025-01-16 15:27:05.984111 | orchestrator | 2025-01-16 15:27:05.984643 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-01-16 15:27:05.984922 | orchestrator | skipping: no hosts matched 2025-01-16 15:27:05.985224 | orchestrator | 2025-01-16 15:27:05.985696 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-01-16 15:27:05.988909 | orchestrator | skipping: no hosts matched 2025-01-16 15:27:05.989242 | orchestrator | 2025-01-16 15:27:05.989557 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-01-16 15:27:05.995056 | orchestrator | 2025-01-16 15:27:06.184391 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-01-16 15:27:06.184532 | orchestrator | Thursday 16 January 2025 15:27:05 +0000 (0:00:14.412) 0:00:17.779 ****** 2025-01-16 15:27:06.184571 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:27:06.257724 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:27:06.376219 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:27:06.376345 | orchestrator | 2025-01-16 15:27:06.376366 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-01-16 15:27:06.376381 | orchestrator | Thursday 16 January 2025 15:27:06 +0000 (0:00:00.275) 0:00:18.055 ****** 2025-01-16 15:27:06.376413 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:27:06.394382 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:27:06.394770 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:27:06.394805 | orchestrator | 2025-01-16 15:27:06.394825 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 15:27:06.395091 | orchestrator | 2025-01-16 15:27:06 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-01-16 15:27:06.395109 | orchestrator | 2025-01-16 15:27:06 | INFO  | Please wait and do not abort execution. 
2025-01-16 15:27:06.395124 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 15:27:06.395263 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-01-16 15:27:06.395513 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-01-16 15:27:06.395744 | orchestrator | 2025-01-16 15:27:06.395888 | orchestrator | 2025-01-16 15:27:06.396121 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 15:27:06.396329 | orchestrator | Thursday 16 January 2025 15:27:06 +0000 (0:00:00.139) 0:00:18.194 ****** 2025-01-16 15:27:06.396548 | orchestrator | =============================================================================== 2025-01-16 15:27:06.396793 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 14.41s 2025-01-16 15:27:06.397008 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 2.09s 2025-01-16 15:27:06.397090 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.36s 2025-01-16 15:27:06.397320 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.32s 2025-01-16 15:27:06.397398 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.28s 2025-01-16 15:27:06.397677 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.24s 2025-01-16 15:27:06.397764 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.22s 2025-01-16 15:27:06.397925 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.14s 2025-01-16 15:27:06.679879 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=incremental 2025-01-16 15:27:07.658780 | orchestrator | 2025-01-16 15:27:07 | INFO  | Task e679b228-031b-4e78-b4da-f2dd1ad720dd (mariadb_backup) was prepared for execution. 2025-01-16 15:27:09.631848 | orchestrator | 2025-01-16 15:27:07 | INFO  | It takes a moment until task e679b228-031b-4e78-b4da-f2dd1ad720dd (mariadb_backup) has been started and output is visible here. 
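[Annotation] Both backup runs use the same mariadb_backup play; only the mariadb_backup_type extra variable differs. A minimal sketch of driving the two runs from the manager, stopping at the first failure, is shown below. The two osism commands are the ones from this job; wrapping them in a script is the only addition:

    #!/usr/bin/env bash
    # Sketch: take a full MariaDB backup first, then an incremental backup on top of it.
    # An incremental Mariabackup run only makes sense once a full backup exists to base it on.
    set -euo pipefail
    osism apply mariadb_backup -e mariadb_backup_type=full
    osism apply mariadb_backup -e mariadb_backup_type=incremental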
2025-01-16 15:27:09.632029 | orchestrator | 2025-01-16 15:27:09.750750 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-01-16 15:27:09.750881 | orchestrator | 2025-01-16 15:27:09.750902 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-01-16 15:27:09.750929 | orchestrator | Thursday 16 January 2025 15:27:09 +0000 (0:00:00.138) 0:00:00.138 ****** 2025-01-16 15:27:09.750960 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:27:09.866335 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:27:09.866595 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:27:09.866618 | orchestrator | 2025-01-16 15:27:09.866630 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-01-16 15:27:10.192846 | orchestrator | Thursday 16 January 2025 15:27:09 +0000 (0:00:00.234) 0:00:00.372 ****** 2025-01-16 15:27:10.193101 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-01-16 15:27:10.193272 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-01-16 15:27:10.193303 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-01-16 15:27:10.193330 | orchestrator | 2025-01-16 15:27:10.193807 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-01-16 15:27:10.193907 | orchestrator | 2025-01-16 15:27:10.193921 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-01-16 15:27:10.194118 | orchestrator | Thursday 16 January 2025 15:27:10 +0000 (0:00:00.326) 0:00:00.699 ****** 2025-01-16 15:27:10.434930 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-01-16 15:27:10.438343 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-01-16 15:27:10.438385 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-01-16 15:27:10.438391 | orchestrator | 2025-01-16 15:27:10.438403 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-01-16 15:27:10.800804 | orchestrator | Thursday 16 January 2025 15:27:10 +0000 (0:00:00.240) 0:00:00.939 ****** 2025-01-16 15:27:10.800963 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-01-16 15:27:10.801219 | orchestrator | 2025-01-16 15:27:10.801237 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-01-16 15:27:10.801250 | orchestrator | Thursday 16 January 2025 15:27:10 +0000 (0:00:00.366) 0:00:01.306 ****** 2025-01-16 15:27:12.958250 | orchestrator | ok: [testbed-node-1] 2025-01-16 15:27:12.958387 | orchestrator | ok: [testbed-node-0] 2025-01-16 15:27:12.958438 | orchestrator | ok: [testbed-node-2] 2025-01-16 15:27:12.958449 | orchestrator | 2025-01-16 15:27:12.958896 | orchestrator | TASK [mariadb : Taking incremental database backup via Mariabackup] ************ 2025-01-16 15:27:12.958924 | orchestrator | Thursday 16 January 2025 15:27:12 +0000 (0:00:02.157) 0:00:03.464 ****** 2025-01-16 15:27:27.217720 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-01-16 15:27:27.272857 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-01-16 15:27:27.273014 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-01-16 15:27:27.273029 | orchestrator | 
mariadb_bootstrap_restart 2025-01-16 15:27:27.273050 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:27:27.273238 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:27:27.273251 | orchestrator | changed: [testbed-node-0] 2025-01-16 15:27:27.273264 | orchestrator | 2025-01-16 15:27:27.273587 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-01-16 15:27:27.273776 | orchestrator | skipping: no hosts matched 2025-01-16 15:27:27.273991 | orchestrator | 2025-01-16 15:27:27.274206 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-01-16 15:27:27.274423 | orchestrator | skipping: no hosts matched 2025-01-16 15:27:27.277274 | orchestrator | 2025-01-16 15:27:27.443892 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-01-16 15:27:27.444046 | orchestrator | skipping: no hosts matched 2025-01-16 15:27:27.444152 | orchestrator | 2025-01-16 15:27:27.444182 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-01-16 15:27:27.444206 | orchestrator | 2025-01-16 15:27:27.444230 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-01-16 15:27:27.444254 | orchestrator | Thursday 16 January 2025 15:27:27 +0000 (0:00:14.316) 0:00:17.780 ****** 2025-01-16 15:27:27.444299 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:27:27.512115 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:27:27.512254 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:27:27.512265 | orchestrator | 2025-01-16 15:27:27.512278 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-01-16 15:27:27.627423 | orchestrator | Thursday 16 January 2025 15:27:27 +0000 (0:00:00.236) 0:00:18.017 ****** 2025-01-16 15:27:27.627610 | orchestrator | skipping: [testbed-node-0] 2025-01-16 15:27:27.644843 | orchestrator | skipping: [testbed-node-1] 2025-01-16 15:27:27.645019 | orchestrator | skipping: [testbed-node-2] 2025-01-16 15:27:27.645044 | orchestrator | 2025-01-16 15:27:27.645059 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 15:27:27.645074 | orchestrator | 2025-01-16 15:27:27 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-01-16 15:27:27.645088 | orchestrator | 2025-01-16 15:27:27 | INFO  | Please wait and do not abort execution. 
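[Annotation] In both runs only testbed-node-0 reports "changed" for the Mariabackup task; the other shard members skip it, so the backup artefacts end up on that one node. A possible follow-up is to list the backup volume there; the volume name mariadb_backup is the kolla-ansible default and is an assumption here, not something this log confirms:

    # Sketch: inspect the backup volume on the node that actually took the backup.
    # "mariadb_backup" as the Docker volume name is an assumption (kolla-ansible default).
    ssh testbed-node-0 "docker run --rm -v mariadb_backup:/backup:ro alpine ls -lh /backup"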
2025-01-16 15:27:27.645109 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-01-16 15:27:27.645342 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-01-16 15:27:27.645430 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-01-16 15:27:27.645656 | orchestrator | 2025-01-16 15:27:27.645686 | orchestrator | 2025-01-16 15:27:27.645873 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 15:27:27.645995 | orchestrator | Thursday 16 January 2025 15:27:27 +0000 (0:00:00.135) 0:00:18.152 ****** 2025-01-16 15:27:27.646261 | orchestrator | =============================================================================== 2025-01-16 15:27:27.646369 | orchestrator | mariadb : Taking incremental database backup via Mariabackup ----------- 14.32s 2025-01-16 15:27:27.646641 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 2.16s 2025-01-16 15:27:27.646734 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.37s 2025-01-16 15:27:27.646866 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.33s 2025-01-16 15:27:27.646892 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.24s 2025-01-16 15:27:27.646909 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.24s 2025-01-16 15:27:27.647295 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.23s 2025-01-16 15:27:27.910330 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.14s 2025-01-16 15:27:27.910457 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack-services.sh 2025-01-16 15:27:27.913244 | orchestrator | + set -e 2025-01-16 15:27:27.913401 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-01-16 15:27:27.913423 | orchestrator | ++ export INTERACTIVE=false 2025-01-16 15:27:27.913440 | orchestrator | ++ INTERACTIVE=false 2025-01-16 15:27:27.913455 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-01-16 15:27:27.913469 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-01-16 15:27:27.913511 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-01-16 15:27:27.913531 | orchestrator | +++ docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version"}}' osism-ansible 2025-01-16 15:27:27.929047 | orchestrator | 2025-01-16 15:27:31.579893 | orchestrator | # OpenStack endpoints 2025-01-16 15:27:31.580045 | orchestrator | 2025-01-16 15:27:31.580075 | orchestrator | ++ export MANAGER_VERSION=latest 2025-01-16 15:27:31.580100 | orchestrator | ++ MANAGER_VERSION=latest 2025-01-16 15:27:31.580121 | orchestrator | + export OS_CLOUD=admin 2025-01-16 15:27:31.580142 | orchestrator | + OS_CLOUD=admin 2025-01-16 15:27:31.580163 | orchestrator | + echo 2025-01-16 15:27:31.580185 | orchestrator | + echo '# OpenStack endpoints' 2025-01-16 15:27:31.580206 | orchestrator | + echo 2025-01-16 15:27:31.580228 | orchestrator | + openstack endpoint list 2025-01-16 15:27:31.580269 | orchestrator | +----------------------------------+-----------+------------------+-------------------------+---------+-----------+---------------------------------------------------------------------+ 2025-01-16 15:27:31.580293 
| orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2025-01-16 15:27:31.580315 | orchestrator | +----------------------------------+-----------+------------------+-------------------------+---------+-----------+---------------------------------------------------------------------+ 2025-01-16 15:27:31.580337 | orchestrator | | 09628ffb6f514902918c10d486394974 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-01-16 15:27:31.580429 | orchestrator | | 103929bd90884cbdae0f6711af834ba2 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2025-01-16 15:27:31.580456 | orchestrator | | 205a4593a42e4691a2e861f787650134 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2025-01-16 15:27:31.580506 | orchestrator | | 20829def046c4cd39dad4a0051b950e6 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2025-01-16 15:27:31.580530 | orchestrator | | 20ed710abaa74c56904f3a8303963689 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2025-01-16 15:27:31.580576 | orchestrator | | 23c425d28d224f8e8a5863cc4f77b3e4 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2025-01-16 15:27:31.580599 | orchestrator | | 2b7c8f004b7c4d1e837047aaad74dbb1 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2025-01-16 15:27:31.580620 | orchestrator | | 5a37dae405ae464d91a1d5f3898f19cd | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-01-16 15:27:31.580641 | orchestrator | | 63927256fffb4ac490159f86600b6e48 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2025-01-16 15:27:31.580664 | orchestrator | | 6e8b5650defc4a3ba775cdfae7cc2db3 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2025-01-16 15:27:31.580685 | orchestrator | | 715565df19624cf9af4c6dad66aa120d | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2025-01-16 15:27:31.580702 | orchestrator | | 73b642d9d1594669a2d55898db2bcbed | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-01-16 15:27:31.580716 | orchestrator | | 76f09ddf88e8451587cf00c4fb149f92 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2025-01-16 15:27:31.580730 | orchestrator | | 79bac74c5b484dffa4b8395a73e7df97 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2025-01-16 15:27:31.580743 | orchestrator | | a155d86c3add4ae0b18a833103623b9d | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2025-01-16 15:27:31.580757 | orchestrator | | a1b8d3b1ad0f4dccacc052fbdff70c9c | RegionOne | ironic | baremetal | True | internal | https://api-int.testbed.osism.xyz:6385 | 2025-01-16 15:27:31.580771 | orchestrator | | b66e7bff4d9c4338aaab8fa4bd2661df | RegionOne | ironic | baremetal | True | public | https://api.testbed.osism.xyz:6385 | 2025-01-16 15:27:31.580784 | orchestrator | | b9b9ef3d990f4d3d9aff7a424e64c434 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2025-01-16 15:27:31.580796 | orchestrator | | 
bfed7ba005e94e8ea408539a88a55659 | RegionOne | ironic-inspector | baremetal-introspection | True | public | https://api.testbed.osism.xyz:5050 | 2025-01-16 15:27:31.580827 | orchestrator | | d5a6b56f7db44e1f8543c97dcb30f2d6 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2025-01-16 15:27:31.689548 | orchestrator | | e41a71c046444a0daf7a106d9072e165 | RegionOne | ironic-inspector | baremetal-introspection | True | internal | https://api-int.testbed.osism.xyz:5050 | 2025-01-16 15:27:31.689663 | orchestrator | | e83b8c9db4644fbca95cbc96d84d3698 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2025-01-16 15:27:31.689679 | orchestrator | | f0b4a1468e95425f910d538b3b475aa9 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2025-01-16 15:27:31.689692 | orchestrator | | f0eb641e412d4932a28d9366ed2617d0 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-01-16 15:27:31.689731 | orchestrator | | f15f0bbef2eb456f8aefc7a55394f8e4 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2025-01-16 15:27:31.689743 | orchestrator | | f279c740db9a45b39cc074111678450b | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2025-01-16 15:27:31.689755 | orchestrator | +----------------------------------+-----------+------------------+-------------------------+---------+-----------+---------------------------------------------------------------------+ 2025-01-16 15:27:31.689781 | orchestrator | 2025-01-16 15:27:33.825465 | orchestrator | # Cinder 2025-01-16 15:27:33.825593 | orchestrator | 2025-01-16 15:27:33.825614 | orchestrator | + echo 2025-01-16 15:27:33.825630 | orchestrator | + echo '# Cinder' 2025-01-16 15:27:33.825646 | orchestrator | + echo 2025-01-16 15:27:33.825662 | orchestrator | + openstack volume service list 2025-01-16 15:27:33.825712 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-01-16 15:27:33.956612 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2025-01-16 15:27:33.956761 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-01-16 15:27:33.956793 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-01-16T15:27:29.000000 | 2025-01-16 15:27:33.956841 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-01-16T15:27:28.000000 | 2025-01-16 15:27:33.956868 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2025-01-16T15:27:29.000000 | 2025-01-16 15:27:33.956893 | orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-01-16T15:27:28.000000 | 2025-01-16 15:27:33.956916 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-01-16T15:27:29.000000 | 2025-01-16 15:27:33.956939 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-01-16T15:27:29.000000 | 2025-01-16 15:27:33.956962 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 2025-01-16T15:27:27.000000 | 2025-01-16 15:27:33.957000 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up | 2025-01-16T15:27:27.000000 | 2025-01-16 15:27:33.957022 | orchestrator | | cinder-backup | 
testbed-node-3 | nova | enabled | up | 2025-01-16T15:27:27.000000 | 2025-01-16 15:27:33.957045 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-01-16 15:27:33.957091 | orchestrator | 2025-01-16 15:27:36.288925 | orchestrator | # Neutron 2025-01-16 15:27:36.289051 | orchestrator | 2025-01-16 15:27:36.289071 | orchestrator | + echo 2025-01-16 15:27:36.289087 | orchestrator | + echo '# Neutron' 2025-01-16 15:27:36.289103 | orchestrator | + echo 2025-01-16 15:27:36.289118 | orchestrator | + openstack network agent list 2025-01-16 15:27:36.289151 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-01-16 15:27:36.413245 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2025-01-16 15:27:36.413359 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-01-16 15:27:36.413377 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2025-01-16 15:27:36.413391 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2025-01-16 15:27:36.413405 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2025-01-16 15:27:36.413445 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2025-01-16 15:27:36.413460 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2025-01-16 15:27:36.413516 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2025-01-16 15:27:36.413532 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2025-01-16 15:27:36.413546 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2025-01-16 15:27:36.413560 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2025-01-16 15:27:36.413574 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-01-16 15:27:36.413622 | orchestrator | + openstack network service provider list 2025-01-16 15:27:38.018597 | orchestrator | +---------------+------+---------+ 2025-01-16 15:27:38.143753 | orchestrator | | Service Type | Name | Default | 2025-01-16 15:27:38.143832 | orchestrator | +---------------+------+---------+ 2025-01-16 15:27:38.143841 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2025-01-16 15:27:38.143848 | orchestrator | +---------------+------+---------+ 2025-01-16 15:27:38.143866 | orchestrator | 2025-01-16 15:27:40.214979 | orchestrator | # Nova 2025-01-16 15:27:40.215085 | orchestrator | 2025-01-16 15:27:40.215101 | orchestrator | + echo 2025-01-16 15:27:40.215112 | orchestrator | + echo '# Nova' 2025-01-16 15:27:40.215122 | orchestrator | + echo 2025-01-16 15:27:40.215133 | orchestrator | + openstack compute service list 2025-01-16 15:27:40.215162 | orchestrator | 
+--------------------------------------+----------------+-----------------------+----------+---------+-------+----------------------------+ 2025-01-16 15:27:40.337820 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2025-01-16 15:27:40.338084 | orchestrator | +--------------------------------------+----------------+-----------------------+----------+---------+-------+----------------------------+ 2025-01-16 15:27:40.338106 | orchestrator | | 9034f051-0cde-4a02-b73b-f0365a1008fb | nova-scheduler | testbed-node-0 | internal | enabled | up | 2025-01-16T15:27:40.000000 | 2025-01-16 15:27:40.338122 | orchestrator | | a789fe36-28e4-4279-b9f6-8011fa2bfab9 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-01-16T15:27:34.000000 | 2025-01-16 15:27:40.338137 | orchestrator | | 46d5d575-31b4-4638-aaac-5b16579184d8 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-01-16T15:27:35.000000 | 2025-01-16 15:27:40.338151 | orchestrator | | b54141c8-8033-4485-b972-da6ebd9f789f | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-01-16T15:27:37.000000 | 2025-01-16 15:27:40.338165 | orchestrator | | 48b6a84a-f453-4a08-9504-bc1c4234c0aa | nova-conductor | testbed-node-1 | internal | enabled | up | 2025-01-16T15:27:37.000000 | 2025-01-16 15:27:40.338190 | orchestrator | | 4fa3ea88-443e-4f2d-b2f5-29668e36c2de | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-01-16T15:27:37.000000 | 2025-01-16 15:27:40.338204 | orchestrator | | 5eac691d-2c83-4168-b323-39655f5c2688 | nova-compute | testbed-node-3 | nova | enabled | up | 2025-01-16T15:27:34.000000 | 2025-01-16 15:27:40.338218 | orchestrator | | 86afc53e-cbf1-4464-88c3-6c2cc093947f | nova-compute | testbed-node-4 | nova | enabled | up | 2025-01-16T15:27:34.000000 | 2025-01-16 15:27:40.338232 | orchestrator | | 45e97084-fbfb-45ae-9aec-0fdba539f2a1 | nova-compute | testbed-node-5 | nova | enabled | up | 2025-01-16T15:27:34.000000 | 2025-01-16 15:27:40.338246 | orchestrator | | ebcc42bc-19c8-43a6-9a64-47fbf91b12b4 | nova-compute | testbed-node-0-ironic | nova | enabled | up | 2025-01-16T15:27:31.000000 | 2025-01-16 15:27:40.338284 | orchestrator | | ea639336-ffeb-44e4-b102-0d49edd57141 | nova-compute | testbed-node-1-ironic | nova | enabled | up | 2025-01-16T15:27:32.000000 | 2025-01-16 15:27:40.338298 | orchestrator | | 6a7c2613-0354-4208-8d82-ba27dab34ec6 | nova-compute | testbed-node-2-ironic | nova | enabled | up | 2025-01-16T15:27:32.000000 | 2025-01-16 15:27:40.338312 | orchestrator | +--------------------------------------+----------------+-----------------------+----------+---------+-------+----------------------------+ 2025-01-16 15:27:40.338342 | orchestrator | + openstack hypervisor list 2025-01-16 15:27:42.552820 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-01-16 15:27:42.676195 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2025-01-16 15:27:42.676277 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-01-16 15:27:42.676284 | orchestrator | | ed95ab97-4954-4bff-9326-b6f8ca3a12e5 | testbed-node-3 | QEMU | 192.168.16.13 | up | 2025-01-16 15:27:42.676290 | orchestrator | | 4816086b-94ef-487d-82dc-14b82e87cf0b | testbed-node-4 | QEMU | 192.168.16.14 | up | 2025-01-16 15:27:42.676295 | orchestrator | | 21ab222e-5b79-461d-a3be-cb892dfa0881 | testbed-node-5 | QEMU | 192.168.16.15 | up | 
2025-01-16 15:27:42.676301 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-01-16 15:27:42.676317 | orchestrator | 2025-01-16 15:27:43.620988 | orchestrator | # Run OpenStack test play 2025-01-16 15:27:43.621105 | orchestrator | 2025-01-16 15:27:43.621124 | orchestrator | + echo 2025-01-16 15:27:43.621152 | orchestrator | + echo '# Run OpenStack test play' 2025-01-16 15:27:43.621169 | orchestrator | + echo 2025-01-16 15:27:43.621183 | orchestrator | + osism apply --environment openstack test 2025-01-16 15:27:43.621225 | orchestrator | 2025-01-16 15:27:43 | INFO  | Trying to run play test in environment openstack 2025-01-16 15:27:43.651695 | orchestrator | 2025-01-16 15:27:43 | INFO  | Task 9c22f0b1-7f18-4532-89fc-da095b99e160 (test) was prepared for execution. 2025-01-16 15:27:46.847802 | orchestrator | 2025-01-16 15:27:43 | INFO  | It takes a moment until task 9c22f0b1-7f18-4532-89fc-da095b99e160 (test) has been started and output is visible here. 2025-01-16 15:27:46.847979 | orchestrator | 2025-01-16 15:27:49.761314 | orchestrator | PLAY [Create test project] ***************************************************** 2025-01-16 15:27:49.761426 | orchestrator | 2025-01-16 15:27:49.761437 | orchestrator | TASK [Create test domain] ****************************************************** 2025-01-16 15:27:49.761446 | orchestrator | Thursday 16 January 2025 15:27:46 +0000 (0:00:00.884) 0:00:00.884 ****** 2025-01-16 15:27:49.761468 | orchestrator | changed: [localhost] 2025-01-16 15:27:52.845844 | orchestrator | 2025-01-16 15:27:52.845950 | orchestrator | TASK [Create test-admin user] ************************************************** 2025-01-16 15:27:52.845963 | orchestrator | Thursday 16 January 2025 15:27:49 +0000 (0:00:02.913) 0:00:03.797 ****** 2025-01-16 15:27:52.846000 | orchestrator | changed: [localhost] 2025-01-16 15:27:56.661660 | orchestrator | 2025-01-16 15:27:56.661768 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2025-01-16 15:27:56.661782 | orchestrator | Thursday 16 January 2025 15:27:52 +0000 (0:00:03.084) 0:00:06.882 ****** 2025-01-16 15:27:56.661803 | orchestrator | changed: [localhost] 2025-01-16 15:27:59.565932 | orchestrator | 2025-01-16 15:27:59.566110 | orchestrator | TASK [Create test project] ***************************************************** 2025-01-16 15:27:59.566127 | orchestrator | Thursday 16 January 2025 15:27:56 +0000 (0:00:03.815) 0:00:10.698 ****** 2025-01-16 15:27:59.566151 | orchestrator | changed: [localhost] 2025-01-16 15:28:02.642141 | orchestrator | 2025-01-16 15:28:02.642315 | orchestrator | TASK [Create test user] ******************************************************** 2025-01-16 15:28:02.642352 | orchestrator | Thursday 16 January 2025 15:27:59 +0000 (0:00:02.903) 0:00:13.601 ****** 2025-01-16 15:28:02.642402 | orchestrator | changed: [localhost] 2025-01-16 15:28:09.682393 | orchestrator | 2025-01-16 15:28:09.682563 | orchestrator | TASK [Add member roles to user test] ******************************************* 2025-01-16 15:28:09.682586 | orchestrator | Thursday 16 January 2025 15:28:02 +0000 (0:00:03.076) 0:00:16.678 ****** 2025-01-16 15:28:09.682620 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2025-01-16 15:28:13.150541 | orchestrator | changed: [localhost] => (item=member) 2025-01-16 15:28:13.150665 | orchestrator | changed: [localhost] => (item=creator) 2025-01-16 15:28:13.150692 | orchestrator 
| 2025-01-16 15:28:13.150714 | orchestrator | TASK [Create test server group] ************************************************ 2025-01-16 15:28:13.150735 | orchestrator | Thursday 16 January 2025 15:28:09 +0000 (0:00:07.039) 0:00:23.718 ****** 2025-01-16 15:28:13.150773 | orchestrator | changed: [localhost] 2025-01-16 15:28:16.798525 | orchestrator | 2025-01-16 15:28:16.798668 | orchestrator | TASK [Create ssh security group] *********************************************** 2025-01-16 15:28:16.798686 | orchestrator | Thursday 16 January 2025 15:28:13 +0000 (0:00:03.465) 0:00:27.184 ****** 2025-01-16 15:28:16.798707 | orchestrator | changed: [localhost] 2025-01-16 15:28:19.876981 | orchestrator | 2025-01-16 15:28:19.877143 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2025-01-16 15:28:19.877159 | orchestrator | Thursday 16 January 2025 15:28:16 +0000 (0:00:03.650) 0:00:30.834 ****** 2025-01-16 15:28:19.877182 | orchestrator | changed: [localhost] 2025-01-16 15:28:22.737805 | orchestrator | 2025-01-16 15:28:22.737883 | orchestrator | TASK [Create icmp security group] ********************************************** 2025-01-16 15:28:22.737892 | orchestrator | Thursday 16 January 2025 15:28:19 +0000 (0:00:03.079) 0:00:33.914 ****** 2025-01-16 15:28:22.737907 | orchestrator | changed: [localhost] 2025-01-16 15:28:25.561670 | orchestrator | 2025-01-16 15:28:25.561750 | orchestrator | TASK [Add rule to icmp security group] ***************************************** 2025-01-16 15:28:25.561758 | orchestrator | Thursday 16 January 2025 15:28:22 +0000 (0:00:02.849) 0:00:36.763 ****** 2025-01-16 15:28:25.561774 | orchestrator | changed: [localhost] 2025-01-16 15:28:25.561873 | orchestrator | 2025-01-16 15:28:28.460210 | orchestrator | TASK [Create test keypair] ***************************************************** 2025-01-16 15:28:28.460306 | orchestrator | Thursday 16 January 2025 15:28:25 +0000 (0:00:02.835) 0:00:39.599 ****** 2025-01-16 15:28:28.460335 | orchestrator | changed: [localhost] 2025-01-16 15:28:36.863669 | orchestrator | 2025-01-16 15:28:36.863803 | orchestrator | TASK [Create test network topology] ******************************************** 2025-01-16 15:28:36.863827 | orchestrator | Thursday 16 January 2025 15:28:28 +0000 (0:00:02.897) 0:00:42.496 ****** 2025-01-16 15:28:36.863862 | orchestrator | changed: [localhost] 2025-01-16 15:30:15.809814 | orchestrator | 2025-01-16 15:30:15.810229 | orchestrator | TASK [Create test instances] *************************************************** 2025-01-16 15:30:15.810275 | orchestrator | Thursday 16 January 2025 15:28:36 +0000 (0:00:08.402) 0:00:50.899 ****** 2025-01-16 15:30:15.810317 | orchestrator | changed: [localhost] => (item=test) 2025-01-16 15:30:28.889357 | orchestrator | changed: [localhost] => (item=test-1) 2025-01-16 15:30:28.889541 | orchestrator | changed: [localhost] => (item=test-2) 2025-01-16 15:30:28.889564 | orchestrator | changed: [localhost] => (item=test-3) 2025-01-16 15:30:28.889580 | orchestrator | changed: [localhost] => (item=test-4) 2025-01-16 15:30:28.889595 | orchestrator | 2025-01-16 15:30:28.889610 | orchestrator | TASK [Add metadata to instances] *********************************************** 2025-01-16 15:30:28.889626 | orchestrator | Thursday 16 January 2025 15:30:15 +0000 (0:01:38.945) 0:02:29.844 ****** 2025-01-16 15:30:28.889658 | orchestrator | changed: [localhost] => (item=test) 2025-01-16 15:30:45.605801 | orchestrator | changed: [localhost] => 
(item=test-1) 2025-01-16 15:30:45.605952 | orchestrator | changed: [localhost] => (item=test-2) 2025-01-16 15:30:45.605971 | orchestrator | changed: [localhost] => (item=test-3) 2025-01-16 15:30:45.605987 | orchestrator | changed: [localhost] => (item=test-4) 2025-01-16 15:30:45.606002 | orchestrator | 2025-01-16 15:30:45.606080 | orchestrator | TASK [Add tag to instances] **************************************************** 2025-01-16 15:30:45.606125 | orchestrator | Thursday 16 January 2025 15:30:28 +0000 (0:00:13.078) 0:02:42.923 ****** 2025-01-16 15:30:45.606159 | orchestrator | changed: [localhost] => (item=test) 2025-01-16 15:30:51.262971 | orchestrator | changed: [localhost] => (item=test-1) 2025-01-16 15:30:51.263098 | orchestrator | changed: [localhost] => (item=test-2) 2025-01-16 15:30:51.263119 | orchestrator | changed: [localhost] => (item=test-3) 2025-01-16 15:30:51.263134 | orchestrator | changed: [localhost] => (item=test-4) 2025-01-16 15:30:51.263150 | orchestrator | 2025-01-16 15:30:51.263162 | orchestrator | TASK [Create test volume] ****************************************************** 2025-01-16 15:30:51.263173 | orchestrator | Thursday 16 January 2025 15:30:45 +0000 (0:00:16.715) 0:02:59.638 ****** 2025-01-16 15:30:51.263197 | orchestrator | changed: [localhost] 2025-01-16 15:30:56.918937 | orchestrator | 2025-01-16 15:30:56.919097 | orchestrator | TASK [Attach test volume] ****************************************************** 2025-01-16 15:30:56.919127 | orchestrator | Thursday 16 January 2025 15:30:51 +0000 (0:00:05.660) 0:03:05.299 ****** 2025-01-16 15:30:56.919192 | orchestrator | changed: [localhost] 2025-01-16 15:31:00.495311 | orchestrator | 2025-01-16 15:31:00.495406 | orchestrator | TASK [Create floating ip address] ********************************************** 2025-01-16 15:31:00.495415 | orchestrator | Thursday 16 January 2025 15:30:56 +0000 (0:00:05.654) 0:03:10.953 ****** 2025-01-16 15:31:00.495434 | orchestrator | ok: [localhost] 2025-01-16 15:31:01.490919 | orchestrator | 2025-01-16 15:31:01.491153 | orchestrator | TASK [Print floating ip address] *********************************************** 2025-01-16 15:31:01.491184 | orchestrator | Thursday 16 January 2025 15:31:00 +0000 (0:00:03.579) 0:03:14.532 ****** 2025-01-16 15:31:01.491218 | orchestrator | ok: [localhost] => { 2025-01-16 15:31:01.491652 | orchestrator |  "msg": "192.168.112.141" 2025-01-16 15:31:01.491784 | orchestrator | } 2025-01-16 15:31:01.491814 | orchestrator | 2025-01-16 15:31:01.491850 | orchestrator | PLAY RECAP ********************************************************************* 2025-01-16 15:31:01.492061 | orchestrator | 2025-01-16 15:31:01 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-01-16 15:31:01.492097 | orchestrator | 2025-01-16 15:31:01 | INFO  | Please wait and do not abort execution. 
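[Annotation] The test play finishes by printing the floating IP of the test instance (192.168.112.141 in this run), and it has already created the ssh and icmp security groups plus the test keypair, so a quick reachability check from the orchestrator is possible. A sketch, assuming the private key of the keypair was saved to ~/.ssh/test and using the default cirros login of the Cirros image:

    # Sketch: verify the test instance is reachable via its floating IP.
    FIP=192.168.112.141
    ping -c 3 "$FIP"
    ssh -i ~/.ssh/test -o StrictHostKeyChecking=no cirros@"$FIP" uptime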
2025-01-16 15:31:01.492123 | orchestrator | localhost : ok=20  changed=18  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-01-16 15:31:01.492245 | orchestrator | 2025-01-16 15:31:01.492807 | orchestrator | 2025-01-16 15:31:01.492861 | orchestrator | TASKS RECAP ******************************************************************** 2025-01-16 15:31:01.493075 | orchestrator | Thursday 16 January 2025 15:31:01 +0000 (0:00:00.994) 0:03:15.527 ****** 2025-01-16 15:31:01.493589 | orchestrator | =============================================================================== 2025-01-16 15:31:01.493907 | orchestrator | Create test instances -------------------------------------------------- 98.95s 2025-01-16 15:31:01.494169 | orchestrator | Add tag to instances --------------------------------------------------- 16.72s 2025-01-16 15:31:01.494626 | orchestrator | Add metadata to instances ---------------------------------------------- 13.08s 2025-01-16 15:31:01.494880 | orchestrator | Create test network topology -------------------------------------------- 8.40s 2025-01-16 15:31:01.494919 | orchestrator | Add member roles to user test ------------------------------------------- 7.04s 2025-01-16 15:31:01.495064 | orchestrator | Create test volume ------------------------------------------------------ 5.66s 2025-01-16 15:31:01.495274 | orchestrator | Attach test volume ------------------------------------------------------ 5.65s 2025-01-16 15:31:01.495606 | orchestrator | Add manager role to user test-admin ------------------------------------- 3.82s 2025-01-16 15:31:01.495750 | orchestrator | Create ssh security group ----------------------------------------------- 3.65s 2025-01-16 15:31:01.495949 | orchestrator | Create floating ip address ---------------------------------------------- 3.58s 2025-01-16 15:31:01.496147 | orchestrator | Create test server group ------------------------------------------------ 3.47s 2025-01-16 15:31:01.496517 | orchestrator | Create test-admin user -------------------------------------------------- 3.08s 2025-01-16 15:31:01.497080 | orchestrator | Add rule to ssh security group ------------------------------------------ 3.08s 2025-01-16 15:31:01.497276 | orchestrator | Create test user -------------------------------------------------------- 3.08s 2025-01-16 15:31:01.497617 | orchestrator | Create test domain ------------------------------------------------------ 2.91s 2025-01-16 15:31:01.497711 | orchestrator | Create test project ----------------------------------------------------- 2.90s 2025-01-16 15:31:01.497960 | orchestrator | Create test keypair ----------------------------------------------------- 2.90s 2025-01-16 15:31:01.498136 | orchestrator | Create icmp security group ---------------------------------------------- 2.85s 2025-01-16 15:31:01.498539 | orchestrator | Add rule to icmp security group ----------------------------------------- 2.84s 2025-01-16 15:31:01.705924 | orchestrator | Print floating ip address ----------------------------------------------- 0.99s 2025-01-16 15:31:01.706073 | orchestrator | + server_list 2025-01-16 15:31:03.810002 | orchestrator | + openstack --os-cloud test server list 2025-01-16 15:31:03.810223 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-01-16 15:31:03.935840 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2025-01-16 15:31:03.935955 | orchestrator | 
+--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-01-16 15:31:03.935974 | orchestrator | | dd3b100f-6059-4426-a97f-90cf2470cce2 | test-4 | ACTIVE | auto_allocated_network=10.42.0.37, 192.168.112.108 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-01-16 15:31:03.935990 | orchestrator | | dbd191a0-433a-4d50-9866-f0f1ed81410b | test-3 | ACTIVE | auto_allocated_network=10.42.0.34, 192.168.112.155 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-01-16 15:31:03.936004 | orchestrator | | 1ab9b4fd-675a-4fed-934f-aa41fd4968c3 | test-2 | ACTIVE | auto_allocated_network=10.42.0.48, 192.168.112.133 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-01-16 15:31:03.936018 | orchestrator | | 0eaabdc1-f0ee-432c-b079-4c47374ebd5d | test-1 | ACTIVE | auto_allocated_network=10.42.0.52, 192.168.112.110 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-01-16 15:31:03.936032 | orchestrator | | 927c9486-a77c-4f20-a267-53d3be34a67b | test | ACTIVE | auto_allocated_network=10.42.0.5, 192.168.112.141 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-01-16 15:31:03.936046 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-01-16 15:31:03.936078 | orchestrator | + openstack --os-cloud test server show test 2025-01-16 15:31:06.380184 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-01-16 15:31:06.380341 | orchestrator | | Field | Value | 2025-01-16 15:31:06.380377 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-01-16 15:31:06.380403 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-01-16 15:31:06.380441 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-01-16 15:31:06.380456 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-01-16 15:31:06.380503 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2025-01-16 15:31:06.380519 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-01-16 15:31:06.380533 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-01-16 15:31:06.380547 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-01-16 15:31:06.380561 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-01-16 15:31:06.380592 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-01-16 15:31:06.380608 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-01-16 15:31:06.380622 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-01-16 15:31:06.380636 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-01-16 15:31:06.380657 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-01-16 15:31:06.380672 | orchestrator | | OS-EXT-STS:task_state | None | 2025-01-16 15:31:06.380689 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-01-16 15:31:06.380705 | orchestrator | | OS-SRV-USG:launched_at | 2025-01-16T15:28:49.000000 | 2025-01-16 15:31:06.380721 | orchestrator | | 
OS-SRV-USG:terminated_at | None | 2025-01-16 15:31:06.380737 | orchestrator | | accessIPv4 | | 2025-01-16 15:31:06.380757 | orchestrator | | accessIPv6 | | 2025-01-16 15:31:06.380773 | orchestrator | | addresses | auto_allocated_network=10.42.0.5, 192.168.112.141 | 2025-01-16 15:31:06.380796 | orchestrator | | config_drive | | 2025-01-16 15:31:06.380812 | orchestrator | | created | 2025-01-16T15:28:41Z | 2025-01-16 15:31:06.380828 | orchestrator | | description | None | 2025-01-16 15:31:06.380851 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-01-16 15:31:06.380868 | orchestrator | | hostId | 47a618a2028cf0d40980ee79f74205f19c0672cf2e249faf2b5c6406 | 2025-01-16 15:31:06.380884 | orchestrator | | host_status | None | 2025-01-16 15:31:06.380900 | orchestrator | | id | 927c9486-a77c-4f20-a267-53d3be34a67b | 2025-01-16 15:31:06.380916 | orchestrator | | image | Cirros 0.6.2 (14099668-64f0-49d5-a4fb-7dd90e41f3e9) | 2025-01-16 15:31:06.380932 | orchestrator | | key_name | test | 2025-01-16 15:31:06.380955 | orchestrator | | locked | False | 2025-01-16 15:31:06.380972 | orchestrator | | locked_reason | None | 2025-01-16 15:31:06.380988 | orchestrator | | name | test | 2025-01-16 15:31:06.381010 | orchestrator | | progress | 0 | 2025-01-16 15:31:06.381027 | orchestrator | | project_id | b39cff342c08484ca7ddf30a2ab68ce8 | 2025-01-16 15:31:06.381049 | orchestrator | | properties | hostname='test' | 2025-01-16 15:31:06.381063 | orchestrator | | security_groups | name='ssh' | 2025-01-16 15:31:06.381077 | orchestrator | | | name='icmp' | 2025-01-16 15:31:06.381092 | orchestrator | | server_groups | ['56c3337a-6ba9-4e80-95b0-42c895187691'] | 2025-01-16 15:31:06.381106 | orchestrator | | status | ACTIVE | 2025-01-16 15:31:06.381124 | orchestrator | | tags | test | 2025-01-16 15:31:06.381139 | orchestrator | | trusted_image_certificates | None | 2025-01-16 15:31:06.381153 | orchestrator | | updated | 2025-01-16T15:30:18Z | 2025-01-16 15:31:06.381167 | orchestrator | | user_id | 34cf5d8e0d594fb5adaa03d6212b5cfd | 2025-01-16 15:31:06.381189 | orchestrator | | volumes_attached | delete_on_termination='False', id='3112c4e7-6ac6-4e34-9a65-4060356b1c8e' | 2025-01-16 15:31:06.381938 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-01-16 15:31:06.516452 | orchestrator | + openstack --os-cloud test server show test-1 2025-01-16 15:31:08.609051 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-01-16 15:31:08.609173 | orchestrator | | Field | Value | 2025-01-16 15:31:08.609198 | orchestrator | 
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-01-16 15:31:08.609219 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-01-16 15:31:08.609239 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-01-16 15:31:08.609276 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-01-16 15:31:08.609297 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2025-01-16 15:31:08.609317 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-01-16 15:31:08.609366 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-01-16 15:31:08.609387 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-01-16 15:31:08.609436 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-01-16 15:31:08.609542 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-01-16 15:31:08.609567 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-01-16 15:31:08.609588 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-01-16 15:31:08.609608 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-01-16 15:31:08.609635 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-01-16 15:31:08.609656 | orchestrator | | OS-EXT-STS:task_state | None | 2025-01-16 15:31:08.609678 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-01-16 15:31:08.609698 | orchestrator | | OS-SRV-USG:launched_at | 2025-01-16T15:29:08.000000 | 2025-01-16 15:31:08.609716 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-01-16 15:31:08.609737 | orchestrator | | accessIPv4 | | 2025-01-16 15:31:08.609776 | orchestrator | | accessIPv6 | | 2025-01-16 15:31:08.609797 | orchestrator | | addresses | auto_allocated_network=10.42.0.52, 192.168.112.110 | 2025-01-16 15:31:08.609827 | orchestrator | | config_drive | | 2025-01-16 15:31:08.609848 | orchestrator | | created | 2025-01-16T15:29:00Z | 2025-01-16 15:31:08.609870 | orchestrator | | description | None | 2025-01-16 15:31:08.609896 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-01-16 15:31:08.609915 | orchestrator | | hostId | 19cf8a237a7d12dd3269d723b308b423b57c934a7abdec9236793ad1 | 2025-01-16 15:31:08.609934 | orchestrator | | host_status | None | 2025-01-16 15:31:08.609952 | orchestrator | | id | 0eaabdc1-f0ee-432c-b079-4c47374ebd5d | 2025-01-16 15:31:08.609970 | orchestrator | | image | Cirros 0.6.2 (14099668-64f0-49d5-a4fb-7dd90e41f3e9) | 2025-01-16 15:31:08.609989 | orchestrator | | key_name | test | 2025-01-16 15:31:08.610014 | orchestrator | | locked | False | 2025-01-16 15:31:08.610138 | orchestrator | | locked_reason | None | 2025-01-16 15:31:08.610159 | orchestrator | | name | test-1 | 2025-01-16 15:31:08.610194 | orchestrator | | progress | 0 | 2025-01-16 15:31:08.610216 | orchestrator | | project_id | b39cff342c08484ca7ddf30a2ab68ce8 | 2025-01-16 15:31:08.610236 | orchestrator | | properties | hostname='test-1' | 2025-01-16 15:31:08.610257 | orchestrator | | security_groups | name='ssh' | 2025-01-16 15:31:08.610276 | 
orchestrator | | | name='icmp' | 2025-01-16 15:31:08.610297 | orchestrator | | server_groups | ['56c3337a-6ba9-4e80-95b0-42c895187691'] | 2025-01-16 15:31:08.610318 | orchestrator | | status | ACTIVE | 2025-01-16 15:31:08.610348 | orchestrator | | tags | test | 2025-01-16 15:31:08.610369 | orchestrator | | trusted_image_certificates | None | 2025-01-16 15:31:08.610389 | orchestrator | | updated | 2025-01-16T15:30:20Z | 2025-01-16 15:31:08.610409 | orchestrator | | user_id | 34cf5d8e0d594fb5adaa03d6212b5cfd | 2025-01-16 15:31:08.610435 | orchestrator | | volumes_attached | | 2025-01-16 15:31:08.734300 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-01-16 15:31:08.734417 | orchestrator | + openstack --os-cloud test server show test-2 2025-01-16 15:31:11.084498 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-01-16 15:31:11.084597 | orchestrator | | Field | Value | 2025-01-16 15:31:11.084611 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-01-16 15:31:11.084621 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-01-16 15:31:11.084630 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-01-16 15:31:11.084657 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-01-16 15:31:11.084666 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2025-01-16 15:31:11.084675 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-01-16 15:31:11.084684 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-01-16 15:31:11.084704 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-01-16 15:31:11.084713 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-01-16 15:31:11.084731 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-01-16 15:31:11.084741 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-01-16 15:31:11.084749 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-01-16 15:31:11.084758 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-01-16 15:31:11.084767 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-01-16 15:31:11.084780 | orchestrator | | OS-EXT-STS:task_state | None | 2025-01-16 15:31:11.084789 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-01-16 15:31:11.084798 | orchestrator | | OS-SRV-USG:launched_at | 2025-01-16T15:29:30.000000 | 2025-01-16 15:31:11.084807 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-01-16 15:31:11.084819 | orchestrator | | accessIPv4 | | 2025-01-16 15:31:11.084828 | orchestrator | | accessIPv6 | | 2025-01-16 15:31:11.084837 | orchestrator | | addresses | auto_allocated_network=10.42.0.48, 
192.168.112.133 | 2025-01-16 15:31:11.084849 | orchestrator | | config_drive | | 2025-01-16 15:31:11.084859 | orchestrator | | created | 2025-01-16T15:29:23Z | 2025-01-16 15:31:11.084867 | orchestrator | | description | None | 2025-01-16 15:31:11.084880 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-01-16 15:31:11.084889 | orchestrator | | hostId | 91a798aebdf16f1aa35f6d21be66be89e98f0ebaa7a83c6061ffd23a | 2025-01-16 15:31:11.084898 | orchestrator | | host_status | None | 2025-01-16 15:31:11.084906 | orchestrator | | id | 1ab9b4fd-675a-4fed-934f-aa41fd4968c3 | 2025-01-16 15:31:11.084918 | orchestrator | | image | Cirros 0.6.2 (14099668-64f0-49d5-a4fb-7dd90e41f3e9) | 2025-01-16 15:31:11.084927 | orchestrator | | key_name | test | 2025-01-16 15:31:11.084936 | orchestrator | | locked | False | 2025-01-16 15:31:11.084944 | orchestrator | | locked_reason | None | 2025-01-16 15:31:11.084953 | orchestrator | | name | test-2 | 2025-01-16 15:31:11.084965 | orchestrator | | progress | 0 | 2025-01-16 15:31:11.084975 | orchestrator | | project_id | b39cff342c08484ca7ddf30a2ab68ce8 | 2025-01-16 15:31:11.084988 | orchestrator | | properties | hostname='test-2' | 2025-01-16 15:31:11.084997 | orchestrator | | security_groups | name='ssh' | 2025-01-16 15:31:11.085007 | orchestrator | | | name='icmp' | 2025-01-16 15:31:11.085017 | orchestrator | | server_groups | ['56c3337a-6ba9-4e80-95b0-42c895187691'] | 2025-01-16 15:31:11.085030 | orchestrator | | status | ACTIVE | 2025-01-16 15:31:11.085040 | orchestrator | | tags | test | 2025-01-16 15:31:11.085050 | orchestrator | | trusted_image_certificates | None | 2025-01-16 15:31:11.085059 | orchestrator | | updated | 2025-01-16T15:30:23Z | 2025-01-16 15:31:11.085069 | orchestrator | | user_id | 34cf5d8e0d594fb5adaa03d6212b5cfd | 2025-01-16 15:31:11.085082 | orchestrator | | volumes_attached | | 2025-01-16 15:31:11.086821 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-01-16 15:31:11.214099 | orchestrator | + openstack --os-cloud test server show test-3 2025-01-16 15:31:13.301151 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-01-16 15:31:13.301293 | orchestrator | | Field | Value | 2025-01-16 15:31:13.301322 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-01-16 15:31:13.301375 | orchestrator | | OS-DCF:diskConfig | 
MANUAL | 2025-01-16 15:31:13.301399 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-01-16 15:31:13.301421 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-01-16 15:31:13.301444 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2025-01-16 15:31:13.301506 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-01-16 15:31:13.301531 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-01-16 15:31:13.301554 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-01-16 15:31:13.301577 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-01-16 15:31:13.301649 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-01-16 15:31:13.301674 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-01-16 15:31:13.301698 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-01-16 15:31:13.301733 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-01-16 15:31:13.301757 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-01-16 15:31:13.301782 | orchestrator | | OS-EXT-STS:task_state | None | 2025-01-16 15:31:13.301803 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-01-16 15:31:13.301825 | orchestrator | | OS-SRV-USG:launched_at | 2025-01-16T15:29:49.000000 | 2025-01-16 15:31:13.301848 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-01-16 15:31:13.301872 | orchestrator | | accessIPv4 | | 2025-01-16 15:31:13.301912 | orchestrator | | accessIPv6 | | 2025-01-16 15:31:13.301937 | orchestrator | | addresses | auto_allocated_network=10.42.0.34, 192.168.112.155 | 2025-01-16 15:31:13.301973 | orchestrator | | config_drive | | 2025-01-16 15:31:13.302006 | orchestrator | | created | 2025-01-16T15:29:45Z | 2025-01-16 15:31:13.302111 | orchestrator | | description | None | 2025-01-16 15:31:13.302138 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-01-16 15:31:13.302175 | orchestrator | | hostId | 19cf8a237a7d12dd3269d723b308b423b57c934a7abdec9236793ad1 | 2025-01-16 15:31:13.302196 | orchestrator | | host_status | None | 2025-01-16 15:31:13.302218 | orchestrator | | id | dbd191a0-433a-4d50-9866-f0f1ed81410b | 2025-01-16 15:31:13.302240 | orchestrator | | image | Cirros 0.6.2 (14099668-64f0-49d5-a4fb-7dd90e41f3e9) | 2025-01-16 15:31:13.302262 | orchestrator | | key_name | test | 2025-01-16 15:31:13.302297 | orchestrator | | locked | False | 2025-01-16 15:31:13.302319 | orchestrator | | locked_reason | None | 2025-01-16 15:31:13.302341 | orchestrator | | name | test-3 | 2025-01-16 15:31:13.302381 | orchestrator | | progress | 0 | 2025-01-16 15:31:13.302403 | orchestrator | | project_id | b39cff342c08484ca7ddf30a2ab68ce8 | 2025-01-16 15:31:13.302425 | orchestrator | | properties | hostname='test-3' | 2025-01-16 15:31:13.302447 | orchestrator | | security_groups | name='ssh' | 2025-01-16 15:31:13.302501 | orchestrator | | | name='icmp' | 2025-01-16 15:31:13.302525 | orchestrator | | server_groups | ['56c3337a-6ba9-4e80-95b0-42c895187691'] | 2025-01-16 15:31:13.302548 | orchestrator | | status | ACTIVE | 2025-01-16 15:31:13.302571 | orchestrator | | tags | test | 2025-01-16 15:31:13.302602 | orchestrator | | trusted_image_certificates | None | 2025-01-16 15:31:13.302625 | orchestrator | | updated | 2025-01-16T15:30:25Z | 
2025-01-16 15:31:13.302653 | orchestrator | | user_id | 34cf5d8e0d594fb5adaa03d6212b5cfd | 2025-01-16 15:31:13.302683 | orchestrator | | volumes_attached | | 2025-01-16 15:31:13.427907 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-01-16 15:31:13.428036 | orchestrator | + openstack --os-cloud test server show test-4 2025-01-16 15:31:15.443754 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-01-16 15:31:15.443867 | orchestrator | | Field | Value | 2025-01-16 15:31:15.443886 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-01-16 15:31:15.443901 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-01-16 15:31:15.443915 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-01-16 15:31:15.443930 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-01-16 15:31:15.443969 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2025-01-16 15:31:15.443984 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-01-16 15:31:15.444012 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-01-16 15:31:15.444027 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-01-16 15:31:15.444041 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-01-16 15:31:15.444067 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-01-16 15:31:15.444082 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-01-16 15:31:15.444096 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-01-16 15:31:15.444111 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-01-16 15:31:15.444125 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-01-16 15:31:15.444147 | orchestrator | | OS-EXT-STS:task_state | None | 2025-01-16 15:31:15.444161 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-01-16 15:31:15.444180 | orchestrator | | OS-SRV-USG:launched_at | 2025-01-16T15:30:05.000000 | 2025-01-16 15:31:15.444195 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-01-16 15:31:15.444209 | orchestrator | | accessIPv4 | | 2025-01-16 15:31:15.444224 | orchestrator | | accessIPv6 | | 2025-01-16 15:31:15.444238 | orchestrator | | addresses | auto_allocated_network=10.42.0.37, 192.168.112.108 | 2025-01-16 15:31:15.444258 | orchestrator | | config_drive | | 2025-01-16 15:31:15.444274 | orchestrator | | created | 2025-01-16T15:30:01Z | 2025-01-16 15:31:15.444291 | orchestrator | | description | None | 2025-01-16 15:31:15.444306 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', 
extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-01-16 15:31:15.444329 | orchestrator | | hostId | 47a618a2028cf0d40980ee79f74205f19c0672cf2e249faf2b5c6406 | 2025-01-16 15:31:15.444345 | orchestrator | | host_status | None | 2025-01-16 15:31:15.444366 | orchestrator | | id | dd3b100f-6059-4426-a97f-90cf2470cce2 | 2025-01-16 15:31:15.444382 | orchestrator | | image | Cirros 0.6.2 (14099668-64f0-49d5-a4fb-7dd90e41f3e9) | 2025-01-16 15:31:15.444397 | orchestrator | | key_name | test | 2025-01-16 15:31:15.444413 | orchestrator | | locked | False | 2025-01-16 15:31:15.444429 | orchestrator | | locked_reason | None | 2025-01-16 15:31:15.444445 | orchestrator | | name | test-4 | 2025-01-16 15:31:15.444495 | orchestrator | | progress | 0 | 2025-01-16 15:31:15.444512 | orchestrator | | project_id | b39cff342c08484ca7ddf30a2ab68ce8 | 2025-01-16 15:31:15.444528 | orchestrator | | properties | hostname='test-4' | 2025-01-16 15:31:15.444551 | orchestrator | | security_groups | name='ssh' | 2025-01-16 15:31:15.444571 | orchestrator | | | name='icmp' | 2025-01-16 15:31:15.444587 | orchestrator | | server_groups | ['56c3337a-6ba9-4e80-95b0-42c895187691'] | 2025-01-16 15:31:15.444603 | orchestrator | | status | ACTIVE | 2025-01-16 15:31:15.444618 | orchestrator | | tags | test | 2025-01-16 15:31:15.444634 | orchestrator | | trusted_image_certificates | None | 2025-01-16 15:31:15.444649 | orchestrator | | updated | 2025-01-16T15:30:28Z | 2025-01-16 15:31:15.444663 | orchestrator | | user_id | 34cf5d8e0d594fb5adaa03d6212b5cfd | 2025-01-16 15:31:15.444683 | orchestrator | | volumes_attached | | 2025-01-16 15:31:15.446720 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-01-16 15:31:15.576118 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-01-16 15:31:17.400423 | orchestrator | + compute_list 2025-01-16 15:31:17.400546 | orchestrator | + osism manage compute list testbed-node-3 2025-01-16 15:31:17.400566 | orchestrator | +--------------------------------------+--------+----------+ 2025-01-16 15:31:17.532251 | orchestrator | | ID | Name | Status | 2025-01-16 15:31:17.532377 | orchestrator | |--------------------------------------+--------+----------| 2025-01-16 15:31:17.532396 | orchestrator | | dbd191a0-433a-4d50-9866-f0f1ed81410b | test-3 | ACTIVE | 2025-01-16 15:31:17.532411 | orchestrator | | 0eaabdc1-f0ee-432c-b079-4c47374ebd5d | test-1 | ACTIVE | 2025-01-16 15:31:17.532425 | orchestrator | +--------------------------------------+--------+----------+ 2025-01-16 15:31:17.532456 | orchestrator | + osism manage compute list testbed-node-4 2025-01-16 15:31:19.374868 | orchestrator | +--------------------------------------+--------+----------+ 2025-01-16 15:31:19.475665 | orchestrator | | ID | Name | Status | 2025-01-16 15:31:19.475766 | orchestrator | |--------------------------------------+--------+----------| 2025-01-16 15:31:19.475780 | orchestrator | | 1ab9b4fd-675a-4fed-934f-aa41fd4968c3 | test-2 | ACTIVE | 2025-01-16 15:31:19.475791 | orchestrator | +--------------------------------------+--------+----------+ 2025-01-16 15:31:19.475816 
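
The server list, the per-instance "server show" calls and the "osism manage compute list" calls above verify that the five Cirros test instances (test, test-1 through test-4) report ACTIVE and show which compute node each one is scheduled on. A minimal sketch of how such a status check could be scripted with the same openstack CLI follows; the "test" cloud entry and the server names are taken from the trace above, everything else is an assumption and not part of the job itself:

    # check that every test instance reports ACTIVE; exit non-zero otherwise
    for server in test test-1 test-2 test-3 test-4; do
        status=$(openstack --os-cloud test server show "$server" -f value -c status)
        if [[ "$status" != "ACTIVE" ]]; then
            echo "server $server is not ACTIVE (status: $status)" >&2
            exit 1
        fi
    done
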
| orchestrator | + osism manage compute list testbed-node-5 2025-01-16 15:31:21.283885 | orchestrator | +--------------------------------------+--------+----------+ 2025-01-16 15:31:21.399231 | orchestrator | | ID | Name | Status | 2025-01-16 15:31:21.399340 | orchestrator | |--------------------------------------+--------+----------| 2025-01-16 15:31:21.399358 | orchestrator | | dd3b100f-6059-4426-a97f-90cf2470cce2 | test-4 | ACTIVE | 2025-01-16 15:31:21.399376 | orchestrator | | 927c9486-a77c-4f20-a267-53d3be34a67b | test | ACTIVE | 2025-01-16 15:31:21.399393 | orchestrator | +--------------------------------------+--------+----------+ 2025-01-16 15:31:21.399428 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4 2025-01-16 15:31:22.958958 | orchestrator | 2025-01-16 15:31:22 | INFO  | Live migrating server 1ab9b4fd-675a-4fed-934f-aa41fd4968c3 2025-01-16 15:31:27.997625 | orchestrator | 2025-01-16 15:31:27 | INFO  | Live migration of 1ab9b4fd-675a-4fed-934f-aa41fd4968c3 (test-2) is still in progress 2025-01-16 15:31:30.354330 | orchestrator | + compute_list 2025-01-16 15:31:31.994575 | orchestrator | + osism manage compute list testbed-node-3 2025-01-16 15:31:31.994709 | orchestrator | +--------------------------------------+--------+----------+ 2025-01-16 15:31:32.103207 | orchestrator | | ID | Name | Status | 2025-01-16 15:31:32.103326 | orchestrator | |--------------------------------------+--------+----------| 2025-01-16 15:31:32.103345 | orchestrator | | dbd191a0-433a-4d50-9866-f0f1ed81410b | test-3 | ACTIVE | 2025-01-16 15:31:32.103360 | orchestrator | | 1ab9b4fd-675a-4fed-934f-aa41fd4968c3 | test-2 | ACTIVE | 2025-01-16 15:31:32.103374 | orchestrator | | 0eaabdc1-f0ee-432c-b079-4c47374ebd5d | test-1 | ACTIVE | 2025-01-16 15:31:32.103388 | orchestrator | +--------------------------------------+--------+----------+ 2025-01-16 15:31:32.103420 | orchestrator | + osism manage compute list testbed-node-4 2025-01-16 15:31:33.588736 | orchestrator | +------+--------+----------+ 2025-01-16 15:31:33.700404 | orchestrator | | ID | Name | Status | 2025-01-16 15:31:33.700553 | orchestrator | |------+--------+----------| 2025-01-16 15:31:33.700567 | orchestrator | +------+--------+----------+ 2025-01-16 15:31:33.700588 | orchestrator | + osism manage compute list testbed-node-5 2025-01-16 15:31:35.386713 | orchestrator | +--------------------------------------+--------+----------+ 2025-01-16 15:31:35.489775 | orchestrator | | ID | Name | Status | 2025-01-16 15:31:35.489889 | orchestrator | |--------------------------------------+--------+----------| 2025-01-16 15:31:35.489907 | orchestrator | | dd3b100f-6059-4426-a97f-90cf2470cce2 | test-4 | ACTIVE | 2025-01-16 15:31:35.489922 | orchestrator | | 927c9486-a77c-4f20-a267-53d3be34a67b | test | ACTIVE | 2025-01-16 15:31:35.489936 | orchestrator | +--------------------------------------+--------+----------+ 2025-01-16 15:31:35.489968 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5 2025-01-16 15:31:37.138149 | orchestrator | 2025-01-16 15:31:37 | INFO  | Live migrating server dd3b100f-6059-4426-a97f-90cf2470cce2 2025-01-16 15:31:41.794014 | orchestrator | 2025-01-16 15:31:41 | INFO  | Live migration of dd3b100f-6059-4426-a97f-90cf2470cce2 (test-4) is still in progress 2025-01-16 15:31:43.918856 | orchestrator | 2025-01-16 15:31:43 | INFO  | Live migrating server 927c9486-a77c-4f20-a267-53d3be34a67b 2025-01-16 15:31:48.813008 | orchestrator | 2025-01-16 15:31:48 | 
INFO  | Live migration of 927c9486-a77c-4f20-a267-53d3be34a67b (test) is still in progress 2025-01-16 15:31:50.940577 | orchestrator | 2025-01-16 15:31:50 | INFO  | Live migration of 927c9486-a77c-4f20-a267-53d3be34a67b (test) is still in progress 2025-01-16 15:31:53.263152 | orchestrator | + compute_list 2025-01-16 15:31:55.094186 | orchestrator | + osism manage compute list testbed-node-3 2025-01-16 15:31:55.094346 | orchestrator | +--------------------------------------+--------+----------+ 2025-01-16 15:31:55.197432 | orchestrator | | ID | Name | Status | 2025-01-16 15:31:55.197607 | orchestrator | |--------------------------------------+--------+----------| 2025-01-16 15:31:55.197628 | orchestrator | | dd3b100f-6059-4426-a97f-90cf2470cce2 | test-4 | ACTIVE | 2025-01-16 15:31:55.197643 | orchestrator | | dbd191a0-433a-4d50-9866-f0f1ed81410b | test-3 | ACTIVE | 2025-01-16 15:31:55.197658 | orchestrator | | 1ab9b4fd-675a-4fed-934f-aa41fd4968c3 | test-2 | ACTIVE | 2025-01-16 15:31:55.197672 | orchestrator | | 0eaabdc1-f0ee-432c-b079-4c47374ebd5d | test-1 | ACTIVE | 2025-01-16 15:31:55.197686 | orchestrator | | 927c9486-a77c-4f20-a267-53d3be34a67b | test | ACTIVE | 2025-01-16 15:31:55.197700 | orchestrator | +--------------------------------------+--------+----------+ 2025-01-16 15:31:55.197731 | orchestrator | + osism manage compute list testbed-node-4 2025-01-16 15:31:56.727278 | orchestrator | +------+--------+----------+ 2025-01-16 15:31:56.825742 | orchestrator | | ID | Name | Status | 2025-01-16 15:31:56.825856 | orchestrator | |------+--------+----------| 2025-01-16 15:31:56.825873 | orchestrator | +------+--------+----------+ 2025-01-16 15:31:56.825902 | orchestrator | + osism manage compute list testbed-node-5 2025-01-16 15:31:58.346827 | orchestrator | +------+--------+----------+ 2025-01-16 15:31:58.449180 | orchestrator | | ID | Name | Status | 2025-01-16 15:31:58.449298 | orchestrator | |------+--------+----------| 2025-01-16 15:31:58.449327 | orchestrator | +------+--------+----------+ 2025-01-16 15:31:58.449372 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3 2025-01-16 15:32:00.072888 | orchestrator | 2025-01-16 15:32:00 | INFO  | Live migrating server dd3b100f-6059-4426-a97f-90cf2470cce2 2025-01-16 15:32:04.948165 | orchestrator | 2025-01-16 15:32:04 | INFO  | Live migration of dd3b100f-6059-4426-a97f-90cf2470cce2 (test-4) is still in progress 2025-01-16 15:32:07.136573 | orchestrator | 2025-01-16 15:32:07 | INFO  | Live migrating server dbd191a0-433a-4d50-9866-f0f1ed81410b 2025-01-16 15:32:10.860855 | orchestrator | 2025-01-16 15:32:10 | INFO  | Live migration of dbd191a0-433a-4d50-9866-f0f1ed81410b (test-3) is still in progress 2025-01-16 15:32:12.988070 | orchestrator | 2025-01-16 15:32:12 | INFO  | Live migrating server 1ab9b4fd-675a-4fed-934f-aa41fd4968c3 2025-01-16 15:32:16.711512 | orchestrator | 2025-01-16 15:32:16 | INFO  | Live migration of 1ab9b4fd-675a-4fed-934f-aa41fd4968c3 (test-2) is still in progress 2025-01-16 15:32:18.928270 | orchestrator | 2025-01-16 15:32:18 | INFO  | Live migrating server 0eaabdc1-f0ee-432c-b079-4c47374ebd5d 2025-01-16 15:32:23.501035 | orchestrator | 2025-01-16 15:32:23 | INFO  | Live migration of 0eaabdc1-f0ee-432c-b079-4c47374ebd5d (test-1) is still in progress 2025-01-16 15:32:25.683005 | orchestrator | 2025-01-16 15:32:25 | INFO  | Live migrating server 927c9486-a77c-4f20-a267-53d3be34a67b 2025-01-16 15:32:29.914868 | orchestrator | 2025-01-16 15:32:29 | INFO  | Live migration of 
927c9486-a77c-4f20-a267-53d3be34a67b (test) is still in progress 2025-01-16 15:32:32.043994 | orchestrator | 2025-01-16 15:32:32 | INFO  | Live migration of 927c9486-a77c-4f20-a267-53d3be34a67b (test) is still in progress 2025-01-16 15:32:34.316374 | orchestrator | + compute_list 2025-01-16 15:32:35.953817 | orchestrator | + osism manage compute list testbed-node-3 2025-01-16 15:32:35.953946 | orchestrator | +------+--------+----------+ 2025-01-16 15:32:36.069010 | orchestrator | | ID | Name | Status | 2025-01-16 15:32:36.069139 | orchestrator | |------+--------+----------| 2025-01-16 15:32:36.069179 | orchestrator | +------+--------+----------+ 2025-01-16 15:32:36.069209 | orchestrator | + osism manage compute list testbed-node-4 2025-01-16 15:32:37.807807 | orchestrator | +--------------------------------------+--------+----------+ 2025-01-16 15:32:37.903967 | orchestrator | | ID | Name | Status | 2025-01-16 15:32:37.904071 | orchestrator | |--------------------------------------+--------+----------| 2025-01-16 15:32:37.904085 | orchestrator | | dd3b100f-6059-4426-a97f-90cf2470cce2 | test-4 | ACTIVE | 2025-01-16 15:32:37.904096 | orchestrator | | dbd191a0-433a-4d50-9866-f0f1ed81410b | test-3 | ACTIVE | 2025-01-16 15:32:37.904107 | orchestrator | | 1ab9b4fd-675a-4fed-934f-aa41fd4968c3 | test-2 | ACTIVE | 2025-01-16 15:32:37.904118 | orchestrator | | 0eaabdc1-f0ee-432c-b079-4c47374ebd5d | test-1 | ACTIVE | 2025-01-16 15:32:37.904128 | orchestrator | | 927c9486-a77c-4f20-a267-53d3be34a67b | test | ACTIVE | 2025-01-16 15:32:37.904138 | orchestrator | +--------------------------------------+--------+----------+ 2025-01-16 15:32:37.904162 | orchestrator | + osism manage compute list testbed-node-5 2025-01-16 15:32:39.424893 | orchestrator | +------+--------+----------+ 2025-01-16 15:32:39.524695 | orchestrator | | ID | Name | Status | 2025-01-16 15:32:39.524774 | orchestrator | |------+--------+----------| 2025-01-16 15:32:39.524781 | orchestrator | +------+--------+----------+ 2025-01-16 15:32:39.524799 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4 2025-01-16 15:32:41.243364 | orchestrator | 2025-01-16 15:32:41 | INFO  | Live migrating server dd3b100f-6059-4426-a97f-90cf2470cce2 2025-01-16 15:32:44.737285 | orchestrator | 2025-01-16 15:32:44 | INFO  | Live migration of dd3b100f-6059-4426-a97f-90cf2470cce2 (test-4) is still in progress 2025-01-16 15:32:46.927819 | orchestrator | 2025-01-16 15:32:46 | INFO  | Live migrating server dbd191a0-433a-4d50-9866-f0f1ed81410b 2025-01-16 15:32:50.185756 | orchestrator | 2025-01-16 15:32:50 | INFO  | Live migration of dbd191a0-433a-4d50-9866-f0f1ed81410b (test-3) is still in progress 2025-01-16 15:32:52.372393 | orchestrator | 2025-01-16 15:32:52 | INFO  | Live migrating server 1ab9b4fd-675a-4fed-934f-aa41fd4968c3 2025-01-16 15:32:55.669814 | orchestrator | 2025-01-16 15:32:55 | INFO  | Live migration of 1ab9b4fd-675a-4fed-934f-aa41fd4968c3 (test-2) is still in progress 2025-01-16 15:32:57.791201 | orchestrator | 2025-01-16 15:32:57 | INFO  | Live migrating server 0eaabdc1-f0ee-432c-b079-4c47374ebd5d 2025-01-16 15:33:01.151895 | orchestrator | 2025-01-16 15:33:01 | INFO  | Live migration of 0eaabdc1-f0ee-432c-b079-4c47374ebd5d (test-1) is still in progress 2025-01-16 15:33:03.348940 | orchestrator | 2025-01-16 15:33:03 | INFO  | Live migrating server 927c9486-a77c-4f20-a267-53d3be34a67b 2025-01-16 15:33:06.571925 | orchestrator | 2025-01-16 15:33:06 | INFO  | Live migration of 
927c9486-a77c-4f20-a267-53d3be34a67b (test) is still in progress 2025-01-16 15:33:08.702143 | orchestrator | 2025-01-16 15:33:08 | INFO  | Live migration of 927c9486-a77c-4f20-a267-53d3be34a67b (test) is still in progress 2025-01-16 15:33:11.001897 | orchestrator | + compute_list 2025-01-16 15:33:12.573182 | orchestrator | + osism manage compute list testbed-node-3 2025-01-16 15:33:12.573317 | orchestrator | +------+--------+----------+ 2025-01-16 15:33:12.674895 | orchestrator | | ID | Name | Status | 2025-01-16 15:33:12.675029 | orchestrator | |------+--------+----------| 2025-01-16 15:33:12.675058 | orchestrator | +------+--------+----------+ 2025-01-16 15:33:12.675100 | orchestrator | + osism manage compute list testbed-node-4 2025-01-16 15:33:14.189049 | orchestrator | +------+--------+----------+ 2025-01-16 15:33:14.299025 | orchestrator | | ID | Name | Status | 2025-01-16 15:33:14.299152 | orchestrator | |------+--------+----------| 2025-01-16 15:33:14.299171 | orchestrator | +------+--------+----------+ 2025-01-16 15:33:14.299217 | orchestrator | + osism manage compute list testbed-node-5 2025-01-16 15:33:16.143140 | orchestrator | +--------------------------------------+--------+----------+ 2025-01-16 15:33:16.254696 | orchestrator | | ID | Name | Status | 2025-01-16 15:33:16.254796 | orchestrator | |--------------------------------------+--------+----------| 2025-01-16 15:33:16.254812 | orchestrator | | dd3b100f-6059-4426-a97f-90cf2470cce2 | test-4 | ACTIVE | 2025-01-16 15:33:16.254905 | orchestrator | | dbd191a0-433a-4d50-9866-f0f1ed81410b | test-3 | ACTIVE | 2025-01-16 15:33:16.254919 | orchestrator | | 1ab9b4fd-675a-4fed-934f-aa41fd4968c3 | test-2 | ACTIVE | 2025-01-16 15:33:16.254932 | orchestrator | | 0eaabdc1-f0ee-432c-b079-4c47374ebd5d | test-1 | ACTIVE | 2025-01-16 15:33:16.254948 | orchestrator | | 927c9486-a77c-4f20-a267-53d3be34a67b | test | ACTIVE | 2025-01-16 15:33:16.254970 | orchestrator | +--------------------------------------+--------+----------+ 2025-01-16 15:33:16.374983 | orchestrator | changed 2025-01-16 15:33:16.412073 | 2025-01-16 15:33:16.412186 | TASK [Run tempest] 2025-01-16 15:33:16.521239 | orchestrator | skipping: Conditional result was False 2025-01-16 15:33:16.541269 | 2025-01-16 15:33:16.541427 | TASK [Check prometheus alert status] 2025-01-16 15:33:16.651841 | orchestrator | skipping: Conditional result was False 2025-01-16 15:33:16.697753 | 2025-01-16 15:33:16.697852 | PLAY RECAP 2025-01-16 15:33:16.697946 | orchestrator | ok: 21 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0 2025-01-16 15:33:16.697975 | 2025-01-16 15:33:16.986841 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2025-01-16 15:33:16.989875 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-01-16 15:33:17.713986 | 2025-01-16 15:33:17.714156 | PLAY [Post output play] 2025-01-16 15:33:17.746085 | 2025-01-16 15:33:17.746223 | LOOP [stage-output : Register sources] 2025-01-16 15:33:17.827211 | 2025-01-16 15:33:17.827460 | TASK [stage-output : Check sudo] 2025-01-16 15:33:18.555413 | orchestrator | sudo: a password is required 2025-01-16 15:33:18.870427 | orchestrator | ok: Runtime: 0:00:00.009164 2025-01-16 15:33:18.888909 | 2025-01-16 15:33:18.889073 | LOOP [stage-output : Set source and destination for files and folders] 2025-01-16 15:33:18.932814 | 2025-01-16 15:33:18.933108 | TASK [stage-output : Build a list of source, dest dictionaries] 2025-01-16 15:33:19.014513 | orchestrator | 
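
The migration exercise traced above first drains testbed-node-4 and testbed-node-5 onto testbed-node-3, then moves every instance to testbed-node-4 and finally to testbed-node-5, re-checking placement between steps. The sketch below restates that round-trip using only the commands visible in the trace; the body of compute_list is an assumption inferred from the three list calls that follow each "+ compute_list" line:

    # placement check run between the migration steps
    compute_list() {
        for node in testbed-node-3 testbed-node-4 testbed-node-5; do
            osism manage compute list "$node"
        done
    }

    compute_list                                                                # baseline placement
    osism manage compute migrate --yes --target testbed-node-3 testbed-node-4   # drain node-4 onto node-3
    compute_list
    osism manage compute migrate --yes --target testbed-node-3 testbed-node-5   # drain node-5 onto node-3
    compute_list                                                                # all five instances on node-3
    osism manage compute migrate --yes --target testbed-node-4 testbed-node-3   # move everything to node-4
    compute_list
    osism manage compute migrate --yes --target testbed-node-5 testbed-node-4   # and finally to node-5
    compute_list                                                                # final placement on node-5
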
ok 2025-01-16 15:33:19.027511 | 2025-01-16 15:33:19.027684 | LOOP [stage-output : Ensure target folders exist] 2025-01-16 15:33:19.508891 | orchestrator | ok: "docs" 2025-01-16 15:33:19.509453 | 2025-01-16 15:33:19.781973 | orchestrator | ok: "artifacts" 2025-01-16 15:33:20.063876 | orchestrator | ok: "logs" 2025-01-16 15:33:20.090513 | 2025-01-16 15:33:20.090677 | LOOP [stage-output : Copy files and folders to staging folder] 2025-01-16 15:33:20.131214 | 2025-01-16 15:33:20.131482 | TASK [stage-output : Make all log files readable] 2025-01-16 15:33:20.481032 | orchestrator | ok 2025-01-16 15:33:20.491759 | 2025-01-16 15:33:20.491894 | TASK [stage-output : Rename log files that match extensions_to_txt] 2025-01-16 15:33:20.537806 | orchestrator | skipping: Conditional result was False 2025-01-16 15:33:20.553243 | 2025-01-16 15:33:20.553423 | TASK [stage-output : Discover log files for compression] 2025-01-16 15:33:20.578825 | orchestrator | skipping: Conditional result was False 2025-01-16 15:33:20.597332 | 2025-01-16 15:33:20.597486 | LOOP [stage-output : Archive everything from logs] 2025-01-16 15:33:20.686829 | 2025-01-16 15:33:20.686979 | PLAY [Post cleanup play] 2025-01-16 15:33:20.723667 | 2025-01-16 15:33:20.723829 | TASK [Set cloud fact (Zuul deployment)] 2025-01-16 15:33:20.785763 | orchestrator | ok 2025-01-16 15:33:20.798024 | 2025-01-16 15:33:20.798144 | TASK [Set cloud fact (local deployment)] 2025-01-16 15:33:20.832759 | orchestrator | skipping: Conditional result was False 2025-01-16 15:33:20.847684 | 2025-01-16 15:33:20.847806 | TASK [Clean the cloud environment] 2025-01-16 15:33:21.540947 | orchestrator | 2025-01-16 15:33:21 - clean up servers 2025-01-16 15:33:22.344937 | orchestrator | 2025-01-16 15:33:22 - testbed-manager 2025-01-16 15:33:22.450389 | orchestrator | 2025-01-16 15:33:22 - testbed-node-2 2025-01-16 15:33:22.537973 | orchestrator | 2025-01-16 15:33:22 - testbed-node-0 2025-01-16 15:33:22.634733 | orchestrator | 2025-01-16 15:33:22 - testbed-node-4 2025-01-16 15:33:22.724667 | orchestrator | 2025-01-16 15:33:22 - testbed-node-5 2025-01-16 15:33:22.831055 | orchestrator | 2025-01-16 15:33:22 - testbed-node-3 2025-01-16 15:33:22.935314 | orchestrator | 2025-01-16 15:33:22 - testbed-node-1 2025-01-16 15:33:23.029793 | orchestrator | 2025-01-16 15:33:23 - clean up keypairs 2025-01-16 15:33:23.047789 | orchestrator | 2025-01-16 15:33:23 - testbed 2025-01-16 15:33:23.080231 | orchestrator | 2025-01-16 15:33:23 - wait for servers to be gone 2025-01-16 15:33:27.725489 | orchestrator | 2025-01-16 15:33:27 - clean up ports 2025-01-16 15:33:27.859977 | orchestrator | 2025-01-16 15:33:27 - 07dfd23b-0cae-4ff4-bc2b-97abc3ab830e 2025-01-16 15:33:28.053553 | orchestrator | 2025-01-16 15:33:28 - 0b8f1ad3-fac4-4a00-ac2f-a962936826c4 2025-01-16 15:33:29.060887 | orchestrator | 2025-01-16 15:33:29 - 1c34293f-29c5-4250-ac78-2f28679bd3a7 2025-01-16 15:33:29.439914 | orchestrator | 2025-01-16 15:33:29 - 314d5586-9187-4875-bfcf-1a19ff4484a1 2025-01-16 15:33:29.630187 | orchestrator | 2025-01-16 15:33:29 - 38a9efc3-f26e-462a-9222-0aec05e6ee20 2025-01-16 15:33:29.814937 | orchestrator | 2025-01-16 15:33:29 - 9db6aea2-d862-4b7a-a967-afab6661c49e 2025-01-16 15:33:29.988247 | orchestrator | 2025-01-16 15:33:29 - d5584d47-b980-4cb8-aa0f-38fc0d92811f 2025-01-16 15:33:30.170611 | orchestrator | 2025-01-16 15:33:30 - clean up volumes 2025-01-16 15:33:30.323644 | orchestrator | 2025-01-16 15:33:30 - testbed-volume-4-node-base 2025-01-16 15:33:30.361963 | orchestrator | 2025-01-16 15:33:30 - 
testbed-volume-0-node-base 2025-01-16 15:33:30.406255 | orchestrator | 2025-01-16 15:33:30 - testbed-volume-3-node-base 2025-01-16 15:33:30.451918 | orchestrator | 2025-01-16 15:33:30 - testbed-volume-5-node-base 2025-01-16 15:33:30.497830 | orchestrator | 2025-01-16 15:33:30 - testbed-volume-2-node-base 2025-01-16 15:33:30.541805 | orchestrator | 2025-01-16 15:33:30 - testbed-volume-1-node-base 2025-01-16 15:33:30.587869 | orchestrator | 2025-01-16 15:33:30 - testbed-volume-manager-base 2025-01-16 15:33:30.632723 | orchestrator | 2025-01-16 15:33:30 - testbed-volume-9-node-3 2025-01-16 15:33:30.674748 | orchestrator | 2025-01-16 15:33:30 - testbed-volume-1-node-1 2025-01-16 15:33:30.715863 | orchestrator | 2025-01-16 15:33:30 - testbed-volume-0-node-0 2025-01-16 15:33:30.752037 | orchestrator | 2025-01-16 15:33:30 - testbed-volume-13-node-1 2025-01-16 15:33:30.792065 | orchestrator | 2025-01-16 15:33:30 - testbed-volume-14-node-2 2025-01-16 15:33:30.829289 | orchestrator | 2025-01-16 15:33:30 - testbed-volume-8-node-2 2025-01-16 15:33:30.871035 | orchestrator | 2025-01-16 15:33:30 - testbed-volume-11-node-5 2025-01-16 15:33:30.916951 | orchestrator | 2025-01-16 15:33:30 - testbed-volume-12-node-0 2025-01-16 15:33:30.965529 | orchestrator | 2025-01-16 15:33:30 - testbed-volume-5-node-5 2025-01-16 15:33:31.010742 | orchestrator | 2025-01-16 15:33:31 - testbed-volume-16-node-4 2025-01-16 15:33:31.056525 | orchestrator | 2025-01-16 15:33:31 - testbed-volume-17-node-5 2025-01-16 15:33:31.103466 | orchestrator | 2025-01-16 15:33:31 - testbed-volume-4-node-4 2025-01-16 15:33:31.146867 | orchestrator | 2025-01-16 15:33:31 - testbed-volume-6-node-0 2025-01-16 15:33:31.194842 | orchestrator | 2025-01-16 15:33:31 - testbed-volume-3-node-3 2025-01-16 15:33:31.236665 | orchestrator | 2025-01-16 15:33:31 - testbed-volume-7-node-1 2025-01-16 15:33:31.280837 | orchestrator | 2025-01-16 15:33:31 - testbed-volume-10-node-4 2025-01-16 15:33:31.324593 | orchestrator | 2025-01-16 15:33:31 - testbed-volume-2-node-2 2025-01-16 15:33:31.369867 | orchestrator | 2025-01-16 15:33:31 - testbed-volume-15-node-3 2025-01-16 15:33:31.407690 | orchestrator | 2025-01-16 15:33:31 - disconnect routers 2025-01-16 15:33:31.473031 | orchestrator | 2025-01-16 15:33:31 - testbed 2025-01-16 15:33:32.215091 | orchestrator | 2025-01-16 15:33:32 - clean up subnets 2025-01-16 15:33:32.247559 | orchestrator | 2025-01-16 15:33:32 - subnet-testbed-management 2025-01-16 15:33:32.391161 | orchestrator | 2025-01-16 15:33:32 - clean up networks 2025-01-16 15:33:33.449762 | orchestrator | 2025-01-16 15:33:33 - net-testbed-management 2025-01-16 15:33:33.857573 | orchestrator | 2025-01-16 15:33:33 - clean up security groups 2025-01-16 15:33:33.900730 | orchestrator | 2025-01-16 15:33:33 - testbed-node 2025-01-16 15:33:33.996097 | orchestrator | 2025-01-16 15:33:33 - testbed-management 2025-01-16 15:33:34.162923 | orchestrator | 2025-01-16 15:33:34 - clean up floating ips 2025-01-16 15:33:34.196985 | orchestrator | 2025-01-16 15:33:34 - 81.163.193.54 2025-01-16 15:33:34.642778 | orchestrator | 2025-01-16 15:33:34 - clean up routers 2025-01-16 15:33:34.746842 | orchestrator | 2025-01-16 15:33:34 - testbed 2025-01-16 15:33:35.943146 | orchestrator | changed 2025-01-16 15:33:36.000468 | 2025-01-16 15:33:36.000609 | PLAY RECAP 2025-01-16 15:33:36.000667 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2025-01-16 15:33:36.000692 | 2025-01-16 15:33:36.118169 | POST-RUN END RESULT_NORMAL: [untrusted : 
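
The "Clean the cloud environment" task above tears the testbed down in a fixed order: servers, keypairs, ports, volumes, router interfaces, subnets, networks, security groups, floating IPs and finally the router. The task uses its own cleanup code; the sketch below only illustrates the same order with the plain openstack CLI, assuming a clouds.yaml entry for the testbed project and that everything in that project may be deleted:

    export OS_CLOUD=testbed                                  # assumed clouds.yaml entry for the testbed project
    for server in testbed-manager testbed-node-{0..5}; do    # server names as listed in the cleanup output
        openstack server delete --wait "$server"
    done
    openstack keypair delete testbed
    # leftover ports and volumes are looked up by ID within the project
    openstack port list -f value -c ID   | xargs -r -n1 openstack port delete
    openstack volume list -f value -c ID | xargs -r -n1 openstack volume delete
    # network plumbing: detach the router, then remove subnet, network and security groups
    openstack router remove subnet testbed subnet-testbed-management
    openstack subnet delete subnet-testbed-management
    openstack network delete net-testbed-management
    openstack security group delete testbed-node testbed-management
    openstack floating ip list -f value -c ID | xargs -r -n1 openstack floating ip delete
    openstack router delete testbed
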
github.com/osism/testbed/playbooks/post.yml@main] 2025-01-16 15:33:36.121201 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-01-16 15:33:36.828085 | 2025-01-16 15:33:36.828241 | PLAY [Base post-fetch] 2025-01-16 15:33:36.858342 | 2025-01-16 15:33:36.858491 | TASK [fetch-output : Set log path for multiple nodes] 2025-01-16 15:33:36.925359 | orchestrator | skipping: Conditional result was False 2025-01-16 15:33:36.943498 | 2025-01-16 15:33:36.943672 | TASK [fetch-output : Set log path for single node] 2025-01-16 15:33:36.997602 | orchestrator | ok 2025-01-16 15:33:37.007417 | 2025-01-16 15:33:37.007543 | LOOP [fetch-output : Ensure local output dirs] 2025-01-16 15:33:37.479710 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/a5ad1afc3d714be699c9c1da1fba5829/work/logs" 2025-01-16 15:33:37.743180 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/a5ad1afc3d714be699c9c1da1fba5829/work/artifacts" 2025-01-16 15:33:38.010907 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/a5ad1afc3d714be699c9c1da1fba5829/work/docs" 2025-01-16 15:33:38.024763 | 2025-01-16 15:33:38.024887 | LOOP [fetch-output : Collect logs, artifacts and docs] 2025-01-16 15:33:38.835003 | orchestrator | changed: .d..t...... ./ 2025-01-16 15:33:38.835418 | orchestrator | changed: All items complete 2025-01-16 15:33:38.835491 | 2025-01-16 15:33:39.435154 | orchestrator | changed: .d..t...... ./ 2025-01-16 15:33:40.048405 | orchestrator | changed: .d..t...... ./ 2025-01-16 15:33:40.085081 | 2025-01-16 15:33:40.085220 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2025-01-16 15:33:40.130196 | orchestrator | skipping: Conditional result was False 2025-01-16 15:33:40.136710 | orchestrator | skipping: Conditional result was False 2025-01-16 15:33:40.191212 | 2025-01-16 15:33:40.191321 | PLAY RECAP 2025-01-16 15:33:40.191389 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2025-01-16 15:33:40.191418 | 2025-01-16 15:33:40.308860 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-01-16 15:33:40.317483 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-01-16 15:33:41.030216 | 2025-01-16 15:33:41.030363 | PLAY [Base post] 2025-01-16 15:33:41.058909 | 2025-01-16 15:33:41.059054 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2025-01-16 15:33:42.102130 | orchestrator | changed 2025-01-16 15:33:42.140087 | 2025-01-16 15:33:42.140214 | PLAY RECAP 2025-01-16 15:33:42.140280 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2025-01-16 15:33:42.140342 | 2025-01-16 15:33:42.250841 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-01-16 15:33:42.257842 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2025-01-16 15:33:43.018265 | 2025-01-16 15:33:43.018472 | PLAY [Base post-logs] 2025-01-16 15:33:43.034639 | 2025-01-16 15:33:43.034770 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2025-01-16 15:33:43.498559 | localhost | changed 2025-01-16 15:33:43.506171 | 2025-01-16 15:33:43.506447 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2025-01-16 15:33:43.541666 | localhost | ok 2025-01-16 15:33:43.552906 | 2025-01-16 15:33:43.553043 | TASK [Set zuul-log-path fact] 2025-01-16 15:33:43.572954 | localhost | ok 2025-01-16 
15:33:43.593099 | 2025-01-16 15:33:43.593214 | TASK [set-zuul-log-path-fact : Set log path for a change] 2025-01-16 15:33:43.639718 | localhost | skipping: Conditional result was False 2025-01-16 15:33:43.648165 | 2025-01-16 15:33:43.648357 | TASK [set-zuul-log-path-fact : Set log path for a ref update] 2025-01-16 15:33:43.700987 | localhost | ok 2025-01-16 15:33:43.706510 | 2025-01-16 15:33:43.706666 | TASK [set-zuul-log-path-fact : Set log path for a periodic job] 2025-01-16 15:33:43.753858 | localhost | skipping: Conditional result was False 2025-01-16 15:33:43.762465 | 2025-01-16 15:33:43.762690 | TASK [set-zuul-log-path-fact : Set log path for a change] 2025-01-16 15:33:43.791083 | localhost | skipping: Conditional result was False 2025-01-16 15:33:43.799166 | 2025-01-16 15:33:43.799347 | TASK [set-zuul-log-path-fact : Set log path for a ref update] 2025-01-16 15:33:43.825841 | localhost | skipping: Conditional result was False 2025-01-16 15:33:43.834170 | 2025-01-16 15:33:43.834405 | TASK [set-zuul-log-path-fact : Set log path for a periodic job] 2025-01-16 15:33:43.861935 | localhost | skipping: Conditional result was False 2025-01-16 15:33:43.875033 | 2025-01-16 15:33:43.875191 | TASK [upload-logs : Create log directories] 2025-01-16 15:33:44.370716 | localhost | changed 2025-01-16 15:33:44.378825 | 2025-01-16 15:33:44.378974 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-01-16 15:33:44.902492 | localhost -> localhost | ok: Runtime: 0:00:00.007279 2025-01-16 15:33:44.913430 | 2025-01-16 15:33:44.913590 | TASK [upload-logs : Upload logs to log server] 2025-01-16 15:33:45.468278 | localhost | Output suppressed because no_log was given 2025-01-16 15:33:45.474481 | 2025-01-16 15:33:45.474639 | LOOP [upload-logs : Compress console log and json output] 2025-01-16 15:33:45.551691 | localhost | skipping: Conditional result was False 2025-01-16 15:33:45.568691 | localhost | skipping: Conditional result was False 2025-01-16 15:33:45.583182 | 2025-01-16 15:33:45.583432 | LOOP [upload-logs : Upload compressed console log and json output] 2025-01-16 15:33:45.645799 | localhost | skipping: Conditional result was False 2025-01-16 15:33:45.646436 | 2025-01-16 15:33:45.658156 | localhost | skipping: Conditional result was False 2025-01-16 15:33:45.669704 | 2025-01-16 15:33:45.670071 | LOOP [upload-logs : Upload console log and json output]
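
Before the upload step above, the post-fetch play pulls logs, artifacts and docs from the node into the executor's build work directory (the three "Ensure local output dirs" paths shown earlier). A rough sketch of that collection step, assuming the zuul-jobs convention of a zuul-output directory in the node user's home; the SSH user/host placeholder below is an assumption, only the work directory path is taken from the trace:

    # pull logs, artifacts and docs from the node into the executor work dir
    NODE=zuul@node.example.org                                         # placeholder for the job node
    WORK=/var/lib/zuul/builds/a5ad1afc3d714be699c9c1da1fba5829/work    # build work dir from this job
    for kind in logs artifacts docs; do
        mkdir -p "$WORK/$kind"
        rsync -a "$NODE:zuul-output/$kind/" "$WORK/$kind/"
    done
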